<div>[[Category:Graphics (English)]]<br />
[[Category:X Server (English)]]<br />
[[Category:HOWTOs (English)]]<br />
{{i18n_links_start}}<br />
{{i18n_entry|English|NVIDIA}}<br />
{{i18n_entry|Türkçe|NVIDIA (Türkçe)}}<br />
{{i18n_entry|Italiano|NVIDIA (Italiano)}}<br />
{{i18n_entry|Nederlands|NVIDIA (Nederlands)}}<br />
{{i18n_entry|Русский|NVIDIA (Russian)}}<br />
{{i18n_entry|Česky|NVIDIA (česky)}}<br />
{{i18n_links_end}}<br />
<br />
===How to install the NVIDIA driver with pacman===

====Info from package maintainer ''tpowa''====

This package is intended for people who run a stock Arch kernel; it is only tested with kernel 2.6 and Xorg.

If you run multiple kernels, you need to install the nvidia package for each of them.

====Installing the drivers====

The packages are in the extra repository; make sure it is enabled for pacman.
Exit the X server before installing, otherwise pacman cannot finish the installation and the driver will not work.
As root, run:
 pacman -Sy nvidia (for newer cards)
 pacman -Sy nvidia-96xx or pacman -Sy nvidia-173xx (for older cards)

For very new cards you may need to install nvidia-beta from the AUR, because the stable drivers may not support them yet. (See /usr/share/doc/nvidia/supported-cards.txt in the nvidia package to check whether your card is supported.)

If you use the '''-mm''' kernel, adjust the above commands accordingly.

See the [http://us.download.nvidia.com/XFree86/Linux-x86/173.14.12/README/appendix-a.html <code>README</code>] from NVIDIA for details on which cards are supported by which driver.

====Configuring the X server====

Edit <code>/etc/X11/XF86Config</code> or your <code>/etc/X11/xorg.conf</code> config file.

In the Module section, disable (comment out) the <code>GLcore</code> and <code>DRI</code> modules, and add:
 Load "glx"

Make sure you do NOT have a line

 Load "type1"

in the Module section, since recent versions of xorg-server do not include the type1 font module (it has been completely replaced by freetype).

Disable the DRI section completely:
 #Section "DRI"
 # Mode 0666
 #EndSection

Change
 Driver "nv"
or
 Driver "vesa"
to
 Driver "nvidia"
If it exists, disable the Chipset option (it is only needed for the nv driver):
 #Chipset "generic"

This covers the basic setup; if you need more tweaking options, have a look at <code>/usr/share/doc/nvidia/README</code>.

You can also run:
 nvidia-xconfig

See [http://wiki.archlinux.org/index.php/Xorg7 installing and configuring xorg].
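
Putting the pieces together, the relevant fragments of <code>xorg.conf</code> might look like the sketch below; the Identifier value is illustrative (keep whatever your file already uses), and the rest of the file stays unchanged:

 Section "Module"
     Load "glx"          # required by the NVIDIA driver
     #Load "GLcore"      # disabled, conflicts with the proprietary GL
     #Load "dri"         # disabled, not used by the NVIDIA driver
 EndSection
 
 Section "Device"
     Identifier "Card0"  # illustrative; keep your existing identifier
     Driver     "nvidia"
 EndSection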

====Enabling Composite in Xorg====
'''Note:''' As of the NVIDIA 180.44 driver, support for GLX with the Damage and Composite X extensions is enabled by default.

Refer to the [[Composite]] wiki page for detailed instructions.

====Modifying the Arch <code>rc.conf</code> file====

Add <code>nvidia</code> to the MODULES section of <code>/etc/rc.conf</code>. This is no longer needed if you run Xorg and udev; it is only required for nvidia-71xx with kernel >=2.6.13.

====Problems that might occur====

=====NVIDIA specific=====
Xorg7:
Remove your old /usr/X11R6 directory; it can cause trouble during installation. Also make sure you have installed <code>pkgconfig</code>: the NVIDIA installer uses pkgconfig to determine where the modular Xorg components are installed.

If you experience slow 3D performance, have a look at
<code>/usr/lib/libGL.so.1</code>, <code>/usr/lib/libGL.so</code>, <code>/usr/lib/libGLcore.so.1</code>.
They may be wrongly linked to Mesa or something else.
Try reinstalling with <code>pacman -S nvidia</code>.

If you get this message when you try to start an OpenGL application (for example enemy-territory, or glxgears):
 Error: Could not open /dev/nvidiactl because the permissions are too
 restrictive. Please see the FREQUENTLY ASKED QUESTIONS
 section of /usr/share/doc/NVIDIA_GLX-1.0/README
 for steps to correct.

add yourself to the <code>video</code> group using <code>gpasswd -a ''yourusername'' video</code> (don't forget to log out and back in, or run <code>source /etc/profile</code>).

If all modes are rejected during mode validation and X loads at a very low resolution with multiple "ghosted" displays, add <code>Option "ModeValidation" "NoTotalSizeCheck"</code> to the Device section.

=====Arch specific=====

'''x86_64 and lib32-* (stale files problem):'''
If you have previously used the NVIDIA binary installer and then switch to the packages pacman provides, you will probably run into a problem where 32-bit proprietary GL apps no longer start (they segfault, with no obvious cause); examples are Google Earth and Wine running GL apps. The problem is that the installer stores its files in other places, and you might have stale files in
 /usr/lib64
 /usr/lib/tls
 /opt/lib32/usr/lib/tls
or similar. The solution is to run
 updatedb; locate libnvidia-tls
then identify the mismatching tls libraries and remove them (if they live in a /tls directory, remove that directory). Contrary to popular belief, the installer does not always find and remove them, and the problem can go unnoticed for a long time. If you cannot identify the mismatching versions, remove the nvidia-utils and lib32-nvidia-utils packages (with pacman -Rd), run the command above again, and any files still listed are the ones that did not belong to the packages. Delete them and reinstall nvidia-utils and lib32-nvidia-utils.
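
As a sketch, the cleanup described above might look like this; the /usr/lib/tls path is only one of the possible stale locations, so remove only directories you have verified:

 updatedb; locate libnvidia-tls   # find all copies of the tls library
 
 pacman -Rd nvidia-utils lib32-nvidia-utils
 updatedb; locate libnvidia-tls   # whatever is still listed was not owned by the packages
 
 rm -rf /usr/lib/tls              # example stale location; verify before removing
 pacman -S nvidia-utils lib32-nvidia-utils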

'''GCC update:'''
The module must be compiled with the same compiler that was used for the kernel, or it may fail to load.
A simple <code>pacman -S nvidia</code> should fix this; if not, wait for a new kernel release and stay with the old kernel and gcc in the meantime.

'''Kernel update:'''
Kernel updates require reinstalling the driver.

====Driver config tool====

A configuration tool for the NVIDIA drivers, called nvidia-settings, is included.
You don't have to use it; it's only an add-on.
For more information about its use, have a look at the following file:
 /usr/share/doc/NVIDIA_GLX-1.0/nvidia-settings-user-guide.txt
Install gtk2 with "pacman -S gtk2" in order to use this tool.

'''NOTE:'''
If you experience problems such as the X server crashing while running the tool, delete the <code>.nvidia-settings-rc</code> file in your home directory.

'''Nvidia-Settings autostart:'''
If you want the settings chosen in nvidia-settings to be applied at startup, first run nvidia-settings at least once so that the settings are saved. The settings file is stored in ~/.nvidia-settings-rc. Then add the following to the auto-startup method of your DE:
 nvidia-settings --load-config-only
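
For example, if you start X with startx instead of a display manager, the equivalent line can go in <code>~/.xinitrc</code> before your window manager is launched:

 nvidia-settings --load-config-only &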

====Known Issues====

If you experience crashes, try disabling the <code>RenderAccel</code> option.

If Xorg crashes complaining about a "conflicting memory type", add <code>nopat</code> at the end of your kernel line in <code>/boot/grub/menu.lst</code>.

If the NVIDIA installer complains about different versions of gcc between the current one and the one used for compiling the kernel, install the traditional way, but remember to <code>export IGNORE_CC_MISMATCH=1</code> first.

If Xorg crashes with a "Signal 11" while using the nvidia-96xx drivers, try disabling PAT: pass the argument "nopat" to your kernel (edit the GRUB configuration and add the string nopat to the line that loads your kernel):

 gr00vy ~ $ cat /boot/grub/menu.lst | grep kernel
 kernel /boot/vmlinuz26 '''nopat''' vga=775 root=/dev/disk/by-uuid/1d156a73-4293-4e4b-ac36-54e85243ffe2 ro

If you have comments on the package, please post them here: http://bbs.archlinux.org/viewtopic.php?t=10692
If you have a problem with the drivers, have a look at the NVIDIA forum: http://www.nvnews.net/vbulletin/forumdisplay.php?s=&forumid=14
For a changelog, look here: http://www.nvidia.com/object/linux_display_ia32_1.0-8762.html

Note: please don't change the above part without notifying me.

===Bad performance after installing a new nvidia driver===

If you experience a very low frame rate compared with an older driver, first check whether direct rendering is enabled:

 glxinfo | grep direct

If you get "direct rendering: No", that is your problem.
Next, check whether the client and server GLX vendors and versions match:

 glxinfo | egrep "glx (vendor|version)"

If you see different vendors or versions for the client and server, run:

 ln -fs /usr/lib/libGL.so.$VER /usr/X11R6/lib/libGL.so
 ln -fs /usr/lib/libGL.so.$VER /usr/X11R6/lib/libGL.so.1
 ln -fs /usr/lib/libGL.so.$VER /usr/lib/libGL.so.1.2

where $VER is the version of the nvidia package that you're using. You can check it with <code>nvidia-settings</code>.
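
If nvidia-settings is unavailable, the installed driver version can also be read from the package manager; a sketch (the version shown in the comment is only an example):

 pacman -Q nvidia-utils   # prints e.g. "nvidia-utils 180.29-1"; use 180.29 as $VER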

That's all. Now restart your X server and you should have normal acceleration.

==Tweaking the NVIDIA drivers==
Open <code>/etc/X11/xorg.conf</code> or <code>/etc/X11/XFree86Config</code> with your editor of choice and try the following options to improve performance.
''Not all options may work for your system; try them carefully and always back up your configuration file.''

===Disable the NVIDIA graphics logo on startup===
Under the <code>Device</code> section, add the <code>"NoLogo"</code> option:
 Option "NoLogo" "True"

===Enable hardware acceleration===
Under the <code>Device</code> section, add the <code>"RenderAccel"</code> option:
 Option "RenderAccel" "True"

'''NOTE:''' RenderAccel is enabled by default since driver version 9746.

===Override monitor detection===
The <code>"ConnectedMonitor"</code> option under the <code>Device</code> section lets you override monitor detection when the X server starts, which can save several seconds at startup. The available options are: <code>"CRT"</code> (cathode ray tube), <code>"DFP"</code> (digital flat panel), or <code>"TV"</code> (television).

The following statement forces the NVIDIA driver to use a DFP monitor:
 Option "ConnectedMonitor" "DFP"

'''NOTE:''' use "CRT" for all analog 15-pin VGA connections (even if you have a flat panel). "DFP" is intended for DVI digital connections only!

===Enable TripleBuffer===
Enable the use of triple buffering by adding the <code>"TripleBuffer"</code> option under the <code>Device</code> section:
 Option "TripleBuffer" "True"

Use this option if your GPU has plenty of RAM (128 MB or more), combined with "Sync to VBlank". You can enable sync to vblank in nvidia-settings.

===Enable BackingStore===
This option enables the server's support for backing store, a mechanism by which pixel data for occluded window regions is remembered by the server, thereby alleviating the need to send expose events to X clients when the data needs to be redisplayed. BackingStore is not specific to the NVIDIA driver but belongs to the X server itself; ATI users would benefit from this option as well.

Under the Device section add:
 Option "BackingStore" "True"

{{Box Note| This option is known to lead to severe instability and crashes with the new xorg 1.5.3. It is recommended to set it to "False".}}

===Use OS-level events===
From the NVIDIA drivers README file: ''"Use OS-level events to efficiently notify X when a client has performed direct rendering to a window that needs to be composited."'' It may help improve performance. This option is currently incompatible with SLI and Multi-GPU modes.

Under the <code>Device</code> section add:
 Option "DamageEvents" "True"
This option is enabled by default in newer drivers.

===Enable power saving===
... For a greener planet (not strictly related to the NVIDIA drivers). Under the <code>Monitor</code> section add:
 Option "DPMS" "True"

===Force PowerMizer performance level (for laptops)===
In your xorg.conf, add the following to the "Device" section:
 # force PowerMizer to a certain level at all times
 # level 0x1 = highest
 # level 0x2 = medium
 # level 0x3 = lowest
 Option "RegistryDwords" "PowerMizerLevelAC=0x3"
 Option "RegistryDwords" "PowerMizerLevel=0x3"

*Note from brazzmonkey:
On my laptop (featuring an NVIDIA GeForce Go 7600), I need to set
 Option "RegistryDwords" "PerfLevelSrc=0x2222"
in /etc/X11/xorg.conf (in the "Device" or "Screen" section). Otherwise I get artefacts, or even display corruption and occasional lockups in KDE4 with OpenGL desktop effects.

*NVIDIA™ driver for X.org: performance and power saving hints.
[http://tutanhamon.com.ua/technovodstvo/NVIDIA-UNIX-driver/ Here] is a page that explains these options quite well.

===Let the GPU set its own performance level (based on temperature)===
In your xorg.conf, add the following to the "Device" section:
 Option "RegistryDwords" "PerfLevelSrc=0x3333"

===Disable vblank interrupts (for laptops)===
The interrupt detection utility powertop shows that the nvidia driver generates an interrupt for every vblank. To disable this, place the following in the Device section:
 Option "OnDemandVBlankInterrupts" "True"

This will reduce interrupts to about one or two per second.

===Enable overclocking via nvidia-settings===
To enable overclocking, place the following line in the "Device" section:
 Option "Coolbits" "1"
This enables on-the-fly overclocking by running nvidia-settings inside X.

Please note that overclocking may damage your hardware, and no responsibility is accepted by the authors of this page for any damage to equipment caused by operating products outside the specifications set by the manufacturer.

===Enable screen rotation through XRandR===
To enable screen rotation, place the following line in the "Device" section:

 Option "RandRRotation" "on"

Restart Xorg, and then type:

 xrandr -o left

The screen should be rotated. To restore it, type:

 xrandr -o normal

===Further reading===
* [http://us.download.nvidia.com/XFree86/Linux-x86/177.82/README/appendix-b.html NVIDIA drivers README file] (latest drivers)
* [http://wiki.compiz-fusion.org/Hardware/NVIDIA Compiz Fusion wiki]

==Using TV-out on your NVIDIA card==

A good article on the subject can be found at:
http://en.wikibooks.org/wiki/NVidia/TV-OUT

==Why is the refresh rate not reported correctly by utilities that use the XRandR X extension (e.g., the GNOME "Screen Resolution Preferences" panel, `xrandr -q`, etc)?==

The XRandR X extension is not presently aware of multiple display devices on a single X screen; it only sees the MetaMode bounding box, which may contain one or more actual modes. This means that if multiple MetaModes have the same bounding box, XRandR will not be able to distinguish between them.

In order to support DynamicTwinView, the NVIDIA X driver must make each MetaMode appear to be unique to XRandR. Presently, the NVIDIA X driver accomplishes this by using the refresh rate as a unique identifier.

You can use `nvidia-settings -q RefreshRate` to query the actual refresh rate on each display device.

The XRandR extension is currently being redesigned by the X.Org community, so the refresh rate workaround may be removed at some point in the future.

This workaround can also be disabled by setting the "DynamicTwinView" X configuration option to FALSE, which will disable NV-CONTROL support for manipulating MetaModes, but will cause the XRandR and XF86VidMode visible refresh rate to be accurate.
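
If you choose to disable the workaround, the option is set like the other driver options on this page, typically in the Device section of xorg.conf:

 Option "DynamicTwinView" "False"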

==How to install the NVIDIA driver with a custom kernel==

It helps to know how the ABS system works; read some of the other wiki pages about it first:
* http://wiki.archlinux.org/index.php/ABS
* http://wiki.archlinux.org/index.php/Makepkg
* http://wiki.archlinux.org/index.php/The_Arch_package_making_HOW-TO_-_with_guidelines

We will quickly create our own pacman package using ABS, which will build the module for the currently running kernel.

Make sure you have [[ABS]] installed and the tree generated:
 pacman -Sy abs

Then, as root:
 abs

Make a temporary directory for creating the new package:
 mkdir -p /var/abs/local/

Make a copy of the nvidia package directory:
 cp -r /var/abs/extra/nvidia/ /var/abs/local/

Set the ownership of the nvidia package directory. (If you are using sudo, makepkg will fail because it cannot create the directories, and makepkg complains if you run it as root.)

 chown -hR <username>:<username> /var/abs/local/nvidia

Go into the temporary nvidia directory:
 cd /var/abs/local/nvidia

We need to edit two files, nvidia.install and PKGBUILD, so that they contain the right kernel version variables; that way the module does not have to be moved out of the stock kernel's /lib/modules/2.6.xx-ARCH directory.

You can get your kernel version and local version name by typing:
 uname -r

* In nvidia.install, replace the KERNEL_VERSION="2.6.xx-ARCH" variable with your kernel version, such as KERNEL_VERSION="2.6.22.6" or KERNEL_VERSION="2.6.22.6-custom", depending on your kernel's version and local version text/number. Do this for all instances of the version number within this file.
* In PKGBUILD, change the _kernver='2.6.xx-ARCH' variable to match your kernel version, as above.
* If you have more than one kernel installed in parallel (such as a custom kernel alongside the default -ARCH kernel), change the "pkgname=nvidia" variable in the PKGBUILD to a unique identifier, such as nvidia-2622 or nvidia-custom. This allows both kernels to use the nvidia module, since the custom nvidia module has a different package name and will not overwrite the original. A consolidated sketch of all these steps follows below.
Then run:
 makepkg -i -c

This builds the NVIDIA module for your custom kernel and cleans up the leftover files from creating the package. Enjoy!
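
The whole procedure can be condensed into a rough sketch like the following; the USERNAME placeholder and the sed pattern are illustrative (the files contain a concrete version string rather than the literal "2.6.xx-ARCH"), so check nvidia.install and PKGBUILD by hand afterwards:

 #!/bin/bash
 # Sketch of the steps above; run the copy/chown part as root,
 # then makepkg as your regular user.
 USERNAME=yourusername            # hypothetical; replace with your login
 KERNVER=$(uname -r)
 
 mkdir -p /var/abs/local/
 cp -r /var/abs/extra/nvidia/ /var/abs/local/
 chown -hR "$USERNAME:$USERNAME" /var/abs/local/nvidia
 cd /var/abs/local/nvidia
 
 # Replace the stock version strings with the running kernel's version;
 # adjust the pattern to whatever is actually in the files.
 sed -i "s/2\.6\.xx-ARCH/$KERNVER/g" nvidia.install PKGBUILD
 
 makepkg -i -c                    # -i installs the package when done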

== If X doesn't start on x86_64 ==

If X doesn't start after installing the NVIDIA video drivers, edit /etc/X11/xorg.conf. In the ServerLayout section, add:
 Option "AutoAddDevices" "false"

And in the ServerFlags section (if it doesn't exist, add it):
 Option "AllowEmptyInput" "false"

== X on a TV (DFP) as the single monitor ==

If your only display is a TV connected as a DFP (DVI/HDMI), you might want to start the X server on it even while it is still turned off. You have to force the nvidia driver to use the DFP, because it probes all connected monitors and falls back to CRT-0 if none is found.
To force nvidia to use the DFP even if it is turned off or otherwise disconnected, you can store a copy of the EDID on disk and use it instead of reading the EDID from the TV/DFP.

To acquire the EDID, you need nvidia-settings (pacman -Sy nvidia-utils). Run it; it will show information in tree form. Go to your GPU (for example "GPU-0"), click the DFP section ("DFP-0" for example), click the "Acquire EDID" button and store the file somewhere, e.g. /etc/X11/dfp0.edid.

Edit /etc/X11/xorg.conf and add to the "Device" section:
 Option "ConnectedMonitor" "DFP"
 Option "CustomEDID" "DFP-0:/etc/X11/dfp0.edid"

The "ConnectedMonitor" option forces the driver to "detect" the DFP even if it is turned off. The "CustomEDID" option provides the EDID data for the device, so X starts up just as if the TV/DFP had been connected during X server start.

This way you can automatically start a display manager at boot time and still have a working and properly configured X screen when you decide to switch on the TV.

== Laptops: X hangs on login/logout, worked around by Ctrl+Alt+Backspace ==

If you are using the legacy NVIDIA drivers and Xorg hangs on login and logout (particularly with an odd screen split into two pieces of black plus white or gray), but you can still log in if you press Ctrl+Alt+Backspace (or whatever the new "kill X" keybinding is) after waiting a few seconds, you probably need this in {{Filename|/etc/modprobe.d/modprobe.conf}}:

 options nvidia NVreg_Mobile=1

One user had luck with the following instead, but it makes performance drop significantly for some:

 options nvidia NVreg_DeviceFileUID=0 NVreg_DeviceFileGID=33 NVreg_DeviceFileMode=0660 NVreg_SoftEDIDs=0 NVreg_Mobile=1

Note that NVreg_Mobile needs to be set according to the kind of laptop you're using:

* 1: Dell laptops
* 2: non-Compal Toshiba laptops
* 3: all other laptops
* 4: Compal Toshiba laptops
* 5: Gateway laptops

See Appendix K of the NVIDIA readme ([http://http.download.nvidia.com/XFree86/Linux-x86/1.0-7182/README/readme.txt here]) for more information.