NVIDIA (日本語)

From ArchWiki

Revision as of 10:01, 16 March 2014

Related articles

  • Nouveau
  • Bumblebee
  • NVIDIA Optimus
  • Xorg

    This article covers installing and configuring NVIDIA's proprietary graphics card driver. For information about the open source driver, see Nouveau. If you have a laptop with NVIDIA Optimus technology, see NVIDIA Optimus instead.


    Installation

    The following are instructions for those using the stock linux package. If you are using a custom kernel, skip ahead to the next subsection.

    Tip: It is usually beneficial to install the NVIDIA driver through pacman rather than through the package provided by the NVIDIA site, as the driver will then be updated along with the rest of the system.

    1. If you do not know what graphics card you have, find out by issuing:

    # lspci -k | grep -A 2 -i "VGA"

    2. Install the appropriate driver for your card:

    • For GeForce 8 series cards and newer [NVC0 and later], install the nvidia package from the official repositories.
    • For GeForce 6/7 series cards [NV40-NVAF], install the nvidia-304xx package from the official repositories.
    • For GeForce 5/FX series cards [NV30-NV38], install the nvidia-173xxAUR package from the AUR.
    • For GeForce 2/3/4 MX/Ti series cards [NV11 and NV17-NV28], install the nvidia-96xxAUR package from the AUR.
    Tip: If unsure, visit NVIDIA's driver download site to find the correct driver for your card. The legacy card list and the nouveau wiki's code names page may also be useful.
    If you have the very latest GPU model, it may be necessary to install nvidia-betaAUR from the Arch User Repository, since the stable drivers may not support the newly introduced features.
    If you are on 64-bit and also need 32-bit OpenGL support, you must additionally install the equivalent lib32 package from the multilib repository (e.g. lib32-nvidia-libgl or lib32-nvidia-{304xx,173xx,96xx}-utils).
    Tip: The legacy nvidia-96xx and nvidia-173xx drivers can also be installed from the unofficial [city] repository.

    3. Reboot. The nvidia package contains a file which blacklists the nouveau module, so rebooting is necessary.

    Once the driver has been installed, continue to the configuration section.

    Alternate install: custom kernel

    First of all, it is good to know how the ABS system works by reading some of the other articles about it:

    Note: The nvidia-allAUR package in the AUR can also make it easier to work with custom or multiple kernels.

    The following is a short tutorial for making a custom NVIDIA driver package using ABS:

    Install abs from the official repositories and generate the tree:

    # abs
    

    As a standard user, make a temporary directory for creating the new package:

    $ mkdir -p ~/abs
    

    Make a copy of the nvidia package directory:

    $ cp -r /var/abs/extra/nvidia/ ~/abs/
    

    Go into the temporary nvidia build directory:

    $ cd ~/abs/nvidia
    

    The nvidia.install and PKGBUILD files need to be edited so that they contain the correct kernel version variables.

    While running the custom kernel, get the appropriate kernel and local version names:

    $ uname -r
    
    1. In nvidia.install, replace the EXTRAMODULES='extramodules-3.4-ARCH' variable with your custom kernel version, e.g. EXTRAMODULES='extramodules-3.4.4' or EXTRAMODULES='extramodules-3.4.4-custom', depending on the version and local version text/numbers of your kernel. Do this for all instances of the version number within this file.
    2. In PKGBUILD, change the _extramodules=extramodules-3.4-ARCH variable to match the appropriate version, as above.
    3. If you have multiple kernels installed in parallel (i.e. your custom kernel alongside the default -ARCH kernel), change the pkgname=nvidia variable in the PKGBUILD to a unique identifier, e.g. nvidia-2622 or nvidia-custom. This will allow both kernels to use the NVIDIA module, since the custom module gets a different package name and will not overwrite the original. You will also need to comment out the line blacklisting the nouveau module in /usr/lib/modprobe.d/nvidia.conf within package() (there is no need to do it twice).
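The version substitutions in steps 1 and 2 can also be scripted. The following is a minimal sketch; the version string 3.4.4-custom and the update_kver helper are hypothetical examples, not part of the wiki's instructions:

```shell
# Sketch: rewrite the "extramodules-3.4-ARCH" version string for a custom
# kernel. "3.4.4-custom" is an example value; on a real system take it
# from the output of `uname -r`.
kver="3.4.4-custom"
update_kver() { sed "s/extramodules-3\.4-ARCH/extramodules-${kver}/g"; }

# Preview the substitution on one line before touching the real files:
echo "_extramodules=extramodules-3.4-ARCH" | update_kver
# Apply in place once the output looks right:
#   update_kver < PKGBUILD > PKGBUILD.new && mv PKGBUILD.new PKGBUILD
```

Remember that step 1 asks for the replacement in every occurrence, which is exactly what the global sed substitution does.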

    Then do:

    $ makepkg -ci
    

    The -c option tells makepkg to clean up leftover files after building the package, and -i instructs makepkg to automatically run pacman to install the resulting package.

    Automatically recompiling the NVIDIA module on kernel updates

    This is possible thanks to nvidia-hookAUR from the AUR. You will need to install the module sources (nvidia-dkmsAUR). With nvidia-hook, the 'automatic recompilation' is done by the nvidia hook of mkinitcpio after a linux-headers package update. You will need to add 'nvidia' to the HOOKS array in /etc/mkinitcpio.conf.

    The hook calls the dkms command to update the NVIDIA module for the new kernel version.

    Note:
    • If you are using this feature, it is important to watch the installation process of the linux (or any other kernel) package. The nvidia hook will print a message if anything goes wrong.
    • If you want to do this manually, see the DKMS section of the wiki.

    Configuration

    It is likely that after installing the driver you will not need to create an Xorg server configuration file. You can run a test to see if the Xorg server functions correctly without one. However, it may be required to create a configuration file (preferably /etc/X11/xorg.conf.d/20-nvidia.conf rather than /etc/X11/xorg.conf) in order to adjust various settings. This configuration can be generated by the NVIDIA Xorg configuration tool, or it can be created manually. If created manually, it can be a minimal configuration (supplying only the basic options to the Xorg server), or it can include a number of settings that override Xorg's auto-detected or pre-configured options.

    Note: Since 1.8.x, Xorg uses configuration files found in /etc/X11/xorg.conf.d/ - see the advanced configuration section.

    Minimal configuration

    A basic configuration block in 20-nvidia.conf (or the deprecated xorg.conf) would look like this:

    /etc/X11/xorg.conf.d/20-nvidia.conf
    Section "Device"
            Identifier "Nvidia Card"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            Option "NoLogo" "true"
            #Option "UseEDID" "false"
            #Option "ConnectedMonitor" "DFP"
            # ...
    EndSection
    
    Tip: If upgrading from nouveau, make sure to remove "nouveau" from /etc/mkinitcpio.conf. If you switch between the open and proprietary drivers often, see Switching between NVIDIA and nouveau drivers.

    Automatic configuration

    The NVIDIA package includes an automatic configuration tool to create an Xorg server configuration file (xorg.conf). It can be run with the following command:

    # nvidia-xconfig
    

    This command will auto-detect the current hardware and create (or edit, if already present) the /etc/X11/xorg.conf configuration.

    If there are instances of DRI, ensure they are commented out:

    #    Load        "dri"
    

    Double-check /etc/X11/xorg.conf to make sure the default depth, horizontal sync, vertical refresh, and resolutions are acceptable.

    Warning: This may still not work properly with Xorg-server 1.8.

    Multiple monitors

    See Multihead for more general information
    Warning: As of August 2013, Xinerama is broken when using the proprietary NVIDIA driver from 319 upwards. Users wishing to use Xinerama with the NVIDIA driver should use the NVIDIA 313 driver, which works only with Linux kernels earlier than 3.10. See this thread for more information.

    To activate dual screen support, simply edit the /etc/X11/xorg.conf.d/10-monitor.conf file which you created earlier.

    For each physical monitor, add one Monitor, Device, and Screen section entry, and then a ServerLayout section to manage them. Be advised that when Xinerama is enabled, the NVIDIA proprietary driver automatically disables compositing. If you desire compositing, you should comment out the Xinerama line in "ServerLayout" and use TwinView (see below) instead.

    /etc/X11/xorg.conf.d/10-monitor.conf
    Section "ServerLayout"
        Identifier     "DualScreen"
        Screen       0 "Screen0"
        Screen       1 "Screen1" RightOf "Screen0" #Screen1 at the right of Screen0
        Option         "Xinerama" "1" #To move windows between screens
    EndSection
    
    Section "Monitor"
        Identifier     "Monitor0"
        Option         "Enable" "true"
    EndSection
    
    Section "Monitor"
        Identifier     "Monitor1"
        Option         "Enable" "true"
    EndSection
    
    Section "Device"
        Identifier     "Device0"
        Driver         "nvidia"
        Screen         0
    EndSection
    
    Section "Device"
        Identifier     "Device1"
        Driver         "nvidia"
        Screen         1
    EndSection
    
    Section "Screen"
        Identifier     "Screen0"
        Device         "Device0"
        Monitor        "Monitor0"
        DefaultDepth    24
        Option         "TwinView" "0"
        SubSection "Display"
            Depth          24
            Modes          "1280x800_75.00"
        EndSubSection
    EndSection
    
    Section "Screen"
        Identifier     "Screen1"
        Device         "Device1"
        Monitor        "Monitor1"
        DefaultDepth   24
        Option         "TwinView" "0"
        SubSection "Display"
            Depth          24
        EndSubSection
    EndSection
    

    TwinView

    If you want only one big screen instead of two, set the TwinView argument to 1. This option should be used instead of Xinerama (see above) if you desire compositing.

    Option "TwinView" "1"
    

    TwinView only works on a per-card basis: if you have multiple cards, you will have to use Xinerama or Zaphod mode (multiple X screens). You can combine TwinView with Zaphod mode, ending up, for example, with two X screens covering two monitors each. Most window managers fail miserably in Zaphod mode; Awesome is the shining exception, and KDE almost works.

    Example configuration:

    /etc/X11/xorg.conf.d/10-monitor.conf
    Section "ServerLayout"
        Identifier     "TwinLayout"
        Screen         0 "metaScreen" 0 0
    EndSection
    
    Section "Monitor"
        Identifier     "Monitor0"
        Option         "Enable" "true"
    EndSection
    
    Section "Monitor"
        Identifier     "Monitor1"
        Option         "Enable" "true"
    EndSection
    
    Section "Device"
        Identifier     "Card0"
        Driver         "nvidia"
        VendorName     "NVIDIA Corporation"
    
        #refer to the link below for more information on each of the following options.
        Option         "HorizSync"          "DFP-0: 28-33; DFP-1 28-33"
        Option         "VertRefresh"        "DFP-0: 43-73; DFP-1 43-73"
        Option         "MetaModes"          "1920x1080, 1920x1080"
        Option         "ConnectedMonitor"   "DFP-0, DFP-1"
        Option         "MetaModeOrientation" "DFP-1 LeftOf DFP-0"
    EndSection
    
    Section "Screen"
        Identifier     "metaScreen"
        Device         "Card0"
        Monitor        "Monitor0"
        DefaultDepth    24
        Option         "TwinView" "True"
        SubSection "Display"
            Modes          "1920x1080"
        EndSubSection
    EndSection
    

    Device option information.

    If you have multiple cards that are SLI capable, it is possible to run more than one monitor attached to separate cards (for example: two cards in SLI with one monitor attached to each). The "MetaModes" option in conjunction with SLI Mosaic mode enables this. Below is a configuration which works for the aforementioned example and runs GNOME flawlessly.

    /etc/X11/xorg.conf.d/10-monitor.conf
    Section "Device"
            Identifier      "Card A"
            Driver          "nvidia"
            BusID           "PCI:1:00:0"
    EndSection
    
    Section "Device"
            Identifier      "Card B"
            Driver          "nvidia"
            BusID           "PCI:2:00:0"
    EndSection
    
    Section "Monitor"
            Identifier      "Right Monitor"
    EndSection
    
    Section "Monitor"
            Identifier      "Left Monitor"
    EndSection
    
    Section "Screen"
            Identifier      "Right Screen"
            Device          "Card A"
            Monitor         "Right Monitor"
            DefaultDepth    24
            Option          "SLI" "Mosaic"
            Option          "Stereo" "0"
            Option          "BaseMosaic" "True"
            Option          "MetaModes" "GPU-0.DFP-0: 1920x1200+4480+0, GPU-1.DFP-0:1920x1200+0+0"
            SubSection      "Display"
                            Depth           24
            EndSubSection
    EndSection
    
    Section "Screen"
            Identifier      "Left Screen"
            Device          "Card B"
            Monitor         "Left Monitor"
            DefaultDepth    24
            Option          "SLI" "Mosaic"
            Option          "Stereo" "0"
            Option          "BaseMosaic" "True"
            Option          "MetaModes" "GPU-0.DFP-0: 1920x1200+4480+0, GPU-1.DFP-0:1920x1200+0+0"
            SubSection      "Display"
                            Depth           24
            EndSubSection
    EndSection
    
    Section "ServerLayout"
            Identifier      "Default"
            Screen 0        "Right Screen" 0 0
            Option          "Xinerama" "0"
    EndSection

    Manual CLI configuration with xrandr

    If the previous solutions do not work for you, you can use your window manager's autostart facility to run an xrandr command like this one:

    xrandr --output DVI-I-0 --auto --primary --left-of DVI-I-1
    

    or:

    xrandr --output DVI-I-1 --pos 1440x0 --mode 1440x900 --rate 75.0
    

    Where:

    • --output is used to indicate the "monitor" to which the options are applied.
    • DVI-I-1 is the name of the second monitor.
    • --pos is the position of the second monitor relative to the first.
    • --mode is the resolution of the second monitor.
    • --rate is the refresh rate in Hz.

    Adapt the xrandr options using the output of the xrandr command run alone in a terminal.
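As a sketch of that adaptation step, the names of the connected outputs can be extracted from xrandr's output. The sample text below stands in for a real xrandr run, and the output names and resolutions are hypothetical:

```shell
# List connected outputs (field 2 of each xrandr line is
# "connected" or "disconnected"). The here-string is example data
# standing in for a real `xrandr` invocation.
sample='DVI-I-0 connected primary 1440x900+0+0 (normal) 408mm x 255mm
DVI-I-1 connected 1440x900+1440+0 (normal) 408mm x 255mm
HDMI-0 disconnected (normal)'
printf '%s\n' "$sample" | awk '$2 == "connected" { print $1 }'
# On a real system: xrandr | awk '$2 == "connected" { print $1 }'
```

The printed names (here DVI-I-0 and DVI-I-1) are what you pass to --output and --left-of.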

    Using NVIDIA Settings

    You can also use the nvidia-settings tool provided by nvidia-utils. With this method, you will use the proprietary software NVIDIA provides with their drivers. Simply run nvidia-settings as root, then configure as you wish, and then save the configuration to /etc/X11/xorg.conf.d/10-monitor.conf.

    ConnectedMonitor

    If the driver does not properly detect a second monitor, you can force it to do so with the ConnectedMonitor option.

    /etc/X11/xorg.conf
    
    Section "Monitor"
        Identifier     "Monitor1"
        VendorName     "Panasonic"
        ModelName      "Panasonic MICRON 2100Ex"
        HorizSync       30.0 - 121.0 # this monitor has incorrect EDID, hence Option "UseEDIDFreqs" "false"
        VertRefresh     50.0 - 160.0
        Option         "DPMS"
    EndSection
    
    Section "Monitor"
        Identifier     "Monitor2"
        VendorName     "Gateway"
        ModelName      "GatewayVX1120"
        HorizSync       30.0 - 121.0
        VertRefresh     50.0 - 160.0
        Option         "DPMS"
    EndSection
    
    Section "Device"
        Identifier     "Device1"
        Driver         "nvidia"
        Option         "NoLogo"
        Option         "UseEDIDFreqs" "false"
        Option         "ConnectedMonitor" "CRT,CRT"
        VendorName     "NVIDIA Corporation"
        BoardName      "GeForce 6200 LE"
        BusID          "PCI:3:0:0"
        Screen          0
    EndSection
    
    Section "Device"
        Identifier     "Device2"
        Driver         "nvidia"
        Option         "NoLogo"
        Option         "UseEDIDFreqs" "false"
        Option         "ConnectedMonitor" "CRT,CRT"
        VendorName     "NVIDIA Corporation"
        BoardName      "GeForce 6200 LE"
        BusID          "PCI:3:0:0"
        Screen          1
    EndSection
    
    

    The duplicated device with Screen is how you get X to use two monitors on one card without TwinView. Note that nvidia-settings will strip out any ConnectedMonitor options you have added.

    Mosaic mode

    Mosaic mode is the only way to use more than 2 monitors across multiple graphics cards with compositing. Your window manager may or may not recognize the distinction between each monitor.

    Base Mosaic

    Base Mosaic mode works on any set of GeForce 8000 series or higher GPUs. It cannot be enabled from within the nvidia-settings GUI; you must either use the nvidia-xconfig command line program or edit xorg.conf by hand. Metamodes must be specified. The following is an example for four DFPs in a 2x2 configuration, each running at 1920x1024, with two DFPs connected to each of two cards:

    $ nvidia-xconfig --base-mosaic --metamodes="GPU-0.DFP-0: 1920x1024+0+0, GPU-0.DFP-1: 1920x1024+1920+0, GPU-1.DFP-0: 1920x1024+0+1024, GPU-1.DFP-1: 1920x1024+1920+1024"
    
    SLI Mosaic

    If you have an SLI configuration and each GPU is a Quadro FX 5800, Quadro Fermi or newer then you can use SLI Mosaic mode. It can be enabled from within the nvidia-settings GUI or from the command line with:

    $ nvidia-xconfig --sli=Mosaic --metamodes="GPU-0.DFP-0: 1920x1024+0+0, GPU-0.DFP-1: 1920x1024+1920+0, GPU-1.DFP-0: 1920x1024+0+1024, GPU-1.DFP-1: 1920x1024+1920+1024"
    

    Tweaking

    GUI: nvidia-settings

    The NVIDIA package includes the nvidia-settings program, which allows you to adjust several additional settings.

    For the settings to be loaded at login, run this command from a terminal:

    $ nvidia-settings --load-config-only
    

    Your desktop environment's auto-startup method may not load nvidia-settings properly (e.g. KDE). To be sure the settings are loaded, put the above command in your ~/.xinitrc file (create it if it does not exist).

    For a dramatic improvement in 2D graphics performance in pixmap-heavy applications such as Firefox, set the InitialPixmapPlacement parameter to 2:

    $ nvidia-settings -a InitialPixmapPlacement=2
    

    This is documented in the nvidia-settings source code. For this setting to persist, the command needs to be run on every boot; add it to your ~/.xinitrc file to have it run automatically with X.

    Tip: On rare occasions the ~/.nvidia-settings-rc may become corrupt. If this happens, the Xorg server may crash and the file will have to be deleted to fix the problem.

    Advanced configuration: 20-nvidia.conf

    Edit /etc/X11/xorg.conf.d/20-nvidia.conf and add the option to the appropriate section. The Xorg server will need to be restarted before any changes take effect.

    See the NVIDIA Accelerated Linux Graphics Driver README and Installation Guide for additional details and options.

    Enabling desktop composition

    As of NVIDIA driver version 180.44, support for GLX with the Damage and Composite X extensions is enabled by default. Refer to the Xorg page for detailed instructions.

    Disabling the logo on startup

    Add the "NoLogo" option under section Device:

    Option "NoLogo" "1"
    

    Enabling hardware acceleration

    Note: RenderAccel is enabled by default since driver version 97.46.xx.

    Add the "RenderAccel" option under section Device:

    Option "RenderAccel" "1"
    

    Overriding monitor detection

    The "ConnectedMonitor" option under section Device allows you to override monitor detection when the X server starts, which may save a significant amount of time at startup. The available options are "CRT" for analog connections, "DFP" for digital monitors, and "TV" for televisions.

    The following statement forces the NVIDIA driver to bypass startup checks and recognize the monitor as DFP:

    Option "ConnectedMonitor" "DFP"
    
    Note: Use "CRT" for all analog 15 pin VGA connections, even if the display is a flat panel. "DFP" is intended for DVI digital connections only.

    Enabling triple buffering

    Enable the use of triple buffering by adding the "TripleBuffer" option under section Device:

    Option "TripleBuffer" "1"
    

    Use this option only if the graphics card has plenty of RAM (128MB or more). The setting only takes effect when syncing to vblank is enabled, one of the options available in nvidia-settings.

    Note: This option may introduce full-screen tearing and reduce performance. As of the R300 drivers, vblank is enabled by default.

    Using OS-level events

    Taken from the NVIDIA driver's README file: "[...] Use OS-level events to efficiently notify X when a client has performed direct rendering to a window that needs to be composited." It may help improve performance, but it is currently incompatible with SLI and Multi-GPU modes.

    Add under section Device:

    Option "DamageEvents" "1"
    
    Note: This option is enabled by default in newer driver versions.

    Enabling power saving

    Add under section Monitor:

    Option "DPMS" "1"
    

    Enabling brightness control

    Add under section Device:

    Option "RegistryDwords" "EnableBrightnessControl=1"
    
    Note: If you already have this enabled and brightness control does not work, try commenting it out.

    Enabling SLI

    Warning: As of May 7, 2011, you may experience sluggish video performance in GNOME 3 after enabling SLI.

    Taken from the NVIDIA driver's README appendix: This option controls the configuration of SLI rendering in supported configurations. A "supported configuration" is a computer equipped with an SLI-Certified Motherboard and 2 or 3 SLI-Certified GeForce GPUs. See NVIDIA's SLI Zone for more information.

    Find the first GPU's PCI Bus ID using lspci:

    $ lspci | grep VGA
    03:00.0 VGA compatible controller: nVidia Corporation G92 [GeForce 8800 GTS 512] (rev a2)
    05:00.0 VGA compatible controller: nVidia Corporation G92 [GeForce 8800 GTS 512] (rev a2)
    

    Add the BusID (3 in the previous example) under section Device:

    BusID "PCI:3:0:0"
    
    Note: The format is important. The BusID value must be specified as "PCI:<BusID>:0:0"
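Note also that lspci prints the bus number in hexadecimal, while the xorg.conf BusID is decimal. A small helper (hypothetical, not part of the driver) can do the conversion:

```shell
# Convert an lspci slot ("bus:device.function", hex) into the decimal
# "PCI:bus:device:function" form expected by xorg.conf.
lspci_to_busid() {
    bus=${1%%:*}; rest=${1#*:}
    dev=${rest%%.*}; fn=${rest#*.}
    printf 'PCI:%d:%d:%d\n' "0x$bus" "0x$dev" "0x$fn"
}
lspci_to_busid "03:00.0"   # matches the example above: PCI:3:0:0
lspci_to_busid "0a:00.0"   # hex 0a becomes decimal 10: PCI:10:0:0
```

For low bus numbers such as 3 the hex and decimal forms look identical, which is why the distinction is easy to miss.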

    Add the desired SLI rendering mode value under section Screen:

    Option "SLI" "AA"
    

    The following table presents the available rendering modes.

    Value                      Behavior
    0, no, off, false, Single  Use only a single GPU when rendering.
    1, yes, on, true, Auto     Enable SLI and allow the driver to automatically select the appropriate rendering mode.
    AFR                        Enable SLI and use the alternate frame rendering mode.
    SFR                        Enable SLI and use the split frame rendering mode.
    AA                         Enable SLI and use SLI antialiasing. Use this in conjunction with full scene antialiasing to improve visual quality.

    Alternatively, you can use the nvidia-xconfig utility to insert these changes into xorg.conf with a single command:

    # nvidia-xconfig --busid=PCI:3:0:0 --sli=AA
    

    To verify that SLI mode is enabled from a shell:

    $ nvidia-settings -q all | grep SLIMode
      Attribute 'SLIMode' (arch:0.0): AA 
        'SLIMode' is a string attribute.
        'SLIMode' is a read-only attribute.
        'SLIMode' can use the following target types: X Screen.
    
    Warning: After enabling SLI your system may become frozen/non-responsive upon starting xorg. It is advisable that you disable your display manager before restarting.

    Forcing Powermizer performance level (for laptops)

    Add under section Device:

    # Force Powermizer to a certain level at all times
    # level 0x0=adaptive (driver default)
    # level 0x1=highest
    # level 0x2=med
    # level 0x3=lowest
    
    # AC settings:
    Option "RegistryDwords" "PowerMizerLevelAC=0x3"
    # Battery settings:
    Option "RegistryDwords" "PowerMizerLevel=0x3"
    
    # (Optional) Adaptive mode on AC power, lowest mode forced on battery power:
    Option "RegistryDwords" "PowerMizerLevelAC=0x0; PowerMizerLevel=0x3"
    
    Setting the GPU performance level according to temperature

    Add under section Device:

    Option "RegistryDwords" "PerfLevelSrc=0x3333"
    

    Disabling vblank interrupts (for laptops)

    When running the interrupt detection utility powertop, it can be observed that the NVIDIA driver generates an interrupt for every vblank. To disable this, place the following in the Device section:

    Option "OnDemandVBlankInterrupts" "1"
    

    This will reduce interrupts to about one or two per second.

    Enabling overclocking

    Warning: Overclocking may damage your hardware. The authors of this page accept no responsibility for damage to any equipment caused by operating products outside the specifications set by the manufacturer.

    To enable GPU and memory overclocking, place the following line in the Device section:

    Option "Coolbits" "1"
    

    This will enable on-the-fly overclocking within an X session by running:

    $ nvidia-settings
    
    Note: GeForce 400/500/600/700 series Fermi/Kepler cores cannot currently be overclocked using the Coolbits method. The alternative is to edit and reflash the GPU BIOS, either under DOS (preferred) or within a Win32 environment by way of nvflash and NiBiTor 6.0. The advantage of BIOS flashing is that not only can voltage limits be raised, but stability is generally improved over software overclocking methods such as Coolbits.

    Setting static 2D/3D clocks

    Set the following string in the Device section to enable PowerMizer at its maximum performance level:

    Option "RegistryDwords" "PerfLevelSrc=0x2222"
    

    Set one of the following two strings in the Device section to enable manual GPU fan control within nvidia-settings:

    Option "Coolbits" "4"
    
    Option "Coolbits" "5"
    

    Tips and tricks

    Fixing terminal resolution

    Transitioning from nouveau may cause your startup terminal to display at a lower resolution. A possible solution (if you are using GRUB) is to edit the GRUB_GFXMODE line of /etc/default/grub with desired display resolutions. Multiple resolutions can be specified, including the default auto, so it is recommended that you edit the line to resemble GRUB_GFXMODE=<desired resolution>,<fallback such as 1024x768>,auto. See http://www.gnu.org/software/grub/manual/html_node/gfxmode.html#gfxmode for more information.
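For example, a /etc/default/grub line following that recommendation might look like this (1920x1080 is a hypothetical native resolution, not a value from this article):

```
GRUB_GFXMODE=1920x1080,1024x768,auto
```

After editing, regenerate the GRUB configuration for the change to take effect.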

    Enabling Pure Video HD (VDPAU/VAAPI)

    Required hardware:

    A video card with at least the second generation PureVideo HD [1]

    Required software:

    NVIDIA video cards with the proprietary driver provide video decoding capabilities via the VDPAU interface at different levels, depending on the PureVideo generation.

    You can also add support for the VA-API interface with libva-vdpau-driver.

    To verify VA-API support:

    $ vainfo
    

    To take full advantage of your video card's hardware decoding capabilities, you need a media player that supports VDPAU or VA-API.

    To enable hardware acceleration in MPlayer, edit ~/.mplayer/config:

    vo=vdpau
    vc=ffmpeg12vdpau,ffwmv3vdpau,ffvc1vdpau,ffh264vdpau,ffodivxvdpau,
    
    Warning: The ffodivxvdpau codec is only supported by the most recent series of NVIDIA hardware. Consider omitting it based on your specific hardware.

    To enable hardware acceleration in VLC, go to:

    Tools > Preferences > Input & Codecs, then check Use GPU accelerated decoding.

    To enable hardware acceleration in smplayer, go to:

    Options > Preferences > General > Video tab, then select vdpau as the output driver.

    To enable hardware acceleration in gnome-mplayer, go to:

    Edit > Preferences, then set the video output to vdpau.

    Playing HD movies on cards with low memory:

    If your graphics card does not have a lot of memory (>512MB?), you can experience glitches when watching 1080p or even 720p movies. To avoid this, start a simple window manager like TWM or MWM.

    Additionally, increasing MPlayer's cache size in ~/.mplayer/config can help if your hard drive spins down while watching HD movies.
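A sketch of such a cache setting in ~/.mplayer/config (8192 kB is an arbitrary example value, not a recommendation from this article):

```
# ~/.mplayer/config
cache=8192
```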

    Hardware-accelerated video decoding with XvMC

    Accelerated decoding of MPEG-1 and MPEG-2 videos via XvMC is supported on GeForce4, GeForce 5 FX, GeForce 6, and GeForce 7 series cards. To use it, create a new file /etc/X11/XvMCConfig with the following content:

    libXvMCNVIDIA_dynamic.so.1
    

    See how to configure supported software.

    Using TV out

    A good article on the subject can be found here.

    X with a TV (DFP) as the only display

    The X server falls back to CRT-0 if no monitor is automatically detected. This can be a problem when using a DVI connected TV as the main display, and X is started while the TV is turned off or otherwise disconnected.

    To force NVIDIA to use DFP, store a copy of the EDID somewhere in the filesystem so that X can parse the file instead of reading EDID from the TV/DFP.

    To acquire the EDID, start nvidia-settings. It will show some information in tree format, ignore the rest of the settings for now and select the GPU (the corresponding entry should be titled "GPU-0" or similar), click the DFP section (again, DFP-0 or similar), click on the Acquire Edid Button and store it somewhere, for example, /etc/X11/dfp0.edid.

    Edit xorg.conf by adding to the Device section:

    Option "ConnectedMonitor" "DFP"
    Option "CustomEDID" "DFP-0:/etc/X11/dfp0.edid"
    

    The ConnectedMonitor option forces the driver to recognize the DFP as if it were connected. The CustomEDID option provides EDID data for the device, meaning that it will start up just as if the TV/DFP were connected during the X startup process.

    This way, one can automatically start a display manager at boot time and still have a working and properly configured X screen by the time the TV gets powered on.

    Check the power source

    The NVIDIA X.org driver can also be used to detect the GPU's current source of power. To see the current power source, check the 'GPUPowerSource' read-only parameter (0 - AC, 1 - battery):

    $ nvidia-settings -q GPUPowerSource -t
    1

    If you are seeing an error message similar to the one below, then you either need to install acpid or start the systemd service with systemctl start acpid.service:

    ACPI: failed to connect to the ACPI event daemon; the daemon
    may not be running or the "AcpidSocketPath" X
    configuration option may not be set correctly. When the
    ACPI event daemon is available, the NVIDIA X driver will
    try to use it to receive ACPI event notifications. For
    details, please see the "ConnectToAcpid" and
    "AcpidSocketPath" X configuration options in Appendix B: X
    Config Options in the README.
    

    (If you are not seeing this error, it is not necessary to install or run acpid solely for this purpose; the power source may be reported correctly even without acpid installed.)

    Displaying the GPU temperature in the shell

    Method 1 - nvidia-settings

    Note: This method requires that you are using X. Use Method 2 or Method 3 if you are not. Also note that Method 3 currently does not work with newer NVIDIA cards such as the G210/220, or with embedded GPUs such as the Zotac IONITX's 8800GS.

    To display the GPU temperature in the shell, use nvidia-settings as follows:

    $ nvidia-settings -q gpucoretemp
    

    This will output something similar to the following:

    Attribute 'GPUCoreTemp' (hostname:0.0): 41.
    'GPUCoreTemp' is an integer attribute.
    'GPUCoreTemp' is a read-only attribute.
    'GPUCoreTemp' can use the following target types: X Screen, GPU.
    

    The GPU temperature of this board is 41 C.

    To display only the temperature, for use in utilities such as rrdtool or conky:

    $ nvidia-settings -q gpucoretemp -t
    41

    Method 2 - nvidia-smi

    Use nvidia-smi, which can read the temperature directly from the GPU without needing to use X at all. This is important for users who do not run X on their machines, such as on servers. To display the GPU temperature in the shell, use nvidia-smi as follows:

    $ nvidia-smi
    

    This will output something similar to the following:

    $ nvidia-smi
    Fri Jan  6 18:53:54 2012       
    +------------------------------------------------------+                       
    | NVIDIA-SMI 2.290.10   Driver Version: 290.10         |                       
    |-------------------------------+----------------------+----------------------+
    | Nb.  Name                     | Bus Id        Disp.  | Volatile ECC SB / DB |
    | Fan   Temp   Power Usage /Cap | Memory Usage         | GPU Util. Compute M. |
    |===============================+======================+======================|
    | 0.  GeForce 8500 GT           | 0000:01:00.0  N/A    |       N/A        N/A |
    |  30%   62 C  N/A   N/A /  N/A |  17%   42MB /  255MB |  N/A      Default    |
    |-------------------------------+----------------------+----------------------|
    | Compute processes:                                               GPU Memory |
    |  GPU  PID     Process name                                       Usage      |
    |=============================================================================|
    |  0.           ERROR: Not Supported                                          |
    +-----------------------------------------------------------------------------+
    
    

    To see just the temperature:

    $ nvidia-smi -q -d TEMPERATURE
    
    ==============NVSMI LOG==============
    
    Timestamp                       : Fri Jan  6 18:50:57 2012
    
    Driver Version                  : 290.10
    
    Attached GPUs                   : 1
    
    GPU 0000:01:00.0
        Temperature
            Gpu                     : 62 C
    
    
    

    To retrieve only the temperature, for use in utilities such as rrdtool or conky:

    $ nvidia-smi -q -d TEMPERATURE | grep Gpu | cut -c35-36
    62
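The fixed column positions used by cut above are fragile across driver versions; extracting by field is a possible alternative. The sample text below stands in for real nvidia-smi output:

```shell
# Pull the number out of the "Gpu ... : 62 C" line without relying on
# column offsets. The here-string is example data standing in for
# `nvidia-smi -q -d TEMPERATURE` output.
sample='GPU 0000:01:00.0
    Temperature
        Gpu                     : 62 C'
printf '%s\n' "$sample" | awk -F': ' '/Gpu/ { print $2+0 }'
# On a real system:
#   nvidia-smi -q -d TEMPERATURE | awk -F': ' '/Gpu/ { print $2+0 }'
```

The +0 coerces "62 C" to the bare number 62, which is convenient for rrdtool or conky.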

    See also: http://www.question-defense.com/2010/03/22/gpu-linux-shell-temp-get-nvidia-gpu-temperatures-via-linux-cli

    Method 3 - nvclock

    Use nvclockAUR, which is available from the AUR.

    Note: nvclock cannot access the thermal sensors on newer NVIDIA cards such as the G210/220.

    There can sometimes be a discrepancy between the temperatures reported by nvclock and nvidia-settings/nv-control. According to this post by the nvclock author (thunderbird), the nvclock values should be more accurate.

    Setting the fan speed at login

    You can adjust the fan speed on your graphics card with nvidia-settings' console interface. First ensure that your Xorg configuration sets the Coolbits option to 4 or 5 in your Device section to enable fan control.

    Option "Coolbits" "4"
    
    Note: GTX 4xx/5xx series cards cannot currently set fan speeds at login using this method. This method only allows for the setting of fan speeds within the current X session by way of nvidia-settings.

    Place the following line in your ~/.xinitrc file to adjust the fan when you launch Xorg. Replace n with the fan speed percentage you want to set.

    nvidia-settings -a "[gpu:0]/GPUFanControlState=1" -a "[fan:0]/GPUCurrentFanSpeed=n"
    

    You can also configure a second GPU by incrementing the GPU and fan number.

    nvidia-settings -a "[gpu:0]/GPUFanControlState=1" \
    -a "[gpu:1]/GPUFanControlState=1" \
    -a "[fan:0]/GPUCurrentFanSpeed=n" \
    -a "[fan:1]/GPUCurrentFanSpeed=n" &
    

    If you use a login manager such as GDM or KDM, you can create a desktop entry file to process this setting. Create ~/.config/autostart/nvidia-fan-speed.desktop and place this text inside it. Again, change n to the speed percentage you want.

    [Desktop Entry]
    Type=Application
    Exec=nvidia-settings -a "[gpu:0]/GPUFanControlState=1" -a "[fan:0]/GPUCurrentFanSpeed=n"
    X-GNOME-Autostart-enabled=true
    Name=nvidia-fan-speed
    
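    With more GPUs, the -a arguments can also be generated in a loop instead of being written out by hand. A sketch, where NGPUS and SPEED are placeholder values; the echo only prints the resulting command for illustration, so remove it to actually apply the settings:

```shell
# Build the nvidia-settings arguments for NGPUS cards in a loop.
# NGPUS and SPEED are placeholders; adjust them to your setup.
NGPUS=2
SPEED=60
args=""
i=0
while [ "$i" -lt "$NGPUS" ]; do
    args="$args -a [gpu:$i]/GPUFanControlState=1 -a [fan:$i]/GPUCurrentFanSpeed=$SPEED"
    i=$((i + 1))
done
echo nvidia-settings$args
```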

    Order of install/uninstall when changing drivers

    Where the old driver is nvidiaO and the new driver is nvidiaN.

    remove nvidiaO
    install nvidia-libglN
    install nvidiaN
    install lib32-nvidia-libgl-N (if required)
    

    Switching between NVIDIA and nouveau drivers

    This article or section is out of date.

    Reason: Fresh installs do not contain /etc/modprobe.d/modprobe.conf by default. The sed lines may not be needed. (Discuss in Talk:NVIDIA (日本語)#)

    If you are switching between the NVIDIA and nouveau drivers often, you can use these two scripts to make it easier (both need to be run as root):

     #!/bin/bash
     # nouveau -> nvidia
     
     set -e
     
     # check if root
     if [[ $EUID -ne 0 ]]; then
        echo "You must be root to run this script. Aborting...";
        exit 1;
     fi
     
     sed -i 's/MODULES="nouveau"/#MODULES="nouveau"/' /etc/mkinitcpio.conf
     
     pacman -Rdds --noconfirm nouveau-dri xf86-video-nouveau mesa-libgl #lib32-nouveau-dri lib32-mesa-libgl
     pacman -S --noconfirm nvidia #lib32-nvidia-libgl
     
     mkinitcpio -p linux
    
     #!/bin/bash
     # nvidia -> nouveau
     
     set -e
     
     # check if root
     if [[ $EUID -ne 0 ]]; then
        echo "You must be root to run this script. Aborting...";
        exit 1;
     fi
     
     sed -i 's/#*MODULES="nouveau"/MODULES="nouveau"/' /etc/mkinitcpio.conf
     
     pacman -Rdds --noconfirm nvidia #lib32-nvidia-libgl
     pacman -S --noconfirm nouveau-dri xf86-video-nouveau #lib32-nouveau-dri
     
     mkinitcpio -p linux
    


    A reboot is needed to complete the switch.

    Adjust the scripts accordingly if using other NVIDIA drivers (e.g. nvidia-173xx).

    Uncomment the lib32 packages if you run a 64-bit system and require the 32-bit libraries (e.g. 32-bit games/Steam).

    Troubleshooting

    Bad performance, e.g. slow repaints when switching tabs in Chrome

    This article or section is out of date.

    Reason: This bug is most likely resolved. See this bug report. (Discuss in Talk:NVIDIA (日本語)#)

    On some machines, recent NVIDIA drivers introduce a bug(?) that causes X11 to redraw pixmaps very slowly. Switching tabs in Chrome/Chromium (with more than two tabs open) takes 1-2 seconds instead of a few milliseconds.

    It seems that setting the variable InitialPixmapPlacement to 0 solves that problem, although (like described some paragraphs above) InitialPixmapPlacement=2 should actually be the faster method.

    The variable can be (temporarily) set with the command

    $ nvidia-settings -a InitialPixmapPlacement=0
    

    To make this permanent, this call can be placed in a startup script.
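    For example, the call could go in ~/.xinitrc so that it runs before the session starts; a sketch, where the session command is illustrative:

```
# ~/.xinitrc (sketch)
nvidia-settings -a InitialPixmapPlacement=0 &
exec openbox-session   # replace with your usual session command
```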

    Gaming using Twinview

    In case you want to play fullscreen games when using Twinview, you will notice that games recognize the two screens as being one big screen. While this is technically correct (the virtual X screen really is the size of your screens combined), you probably do not want to play on both screens at the same time.

    To correct this behavior for SDL, try:

    export SDL_VIDEO_FULLSCREEN_HEAD=1
    

    For OpenGL, add the appropriate Metamodes to your xorg.conf in section Device and restart X:

    Option "Metamodes" "1680x1050,1680x1050; 1280x1024,1280x1024; 1680x1050,NULL; 1280x1024,NULL;"
    

    Another method that may either work alone or in conjunction with those mentioned above is starting games in a separate X server.

    Vertical sync using TwinView

    If you are using TwinView and vertical sync (the "Sync to VBlank" option in nvidia-settings), you will notice that only one screen is properly synced unless you have two identical monitors. Although nvidia-settings offers an option to change which screen is synced (the "Sync to this display device" option), this does not always work. A solution is to add the following environment variables at startup, for example by appending them to /etc/profile:

    export __GL_SYNC_TO_VBLANK=1
    export __GL_SYNC_DISPLAY_DEVICE=DFP-0
    export __VDPAU_NVIDIA_SYNC_DISPLAY_DEVICE=DFP-0
    

    Replace DFP-0 with your preferred screen (DFP-0 is the DVI port and CRT-0 is the VGA port). You can find the identifier for your display in nvidia-settings under the "X Server XVideoSettings" section.

    Old Xorg settings

    If upgrading from an old installation, remove old /usr/X11R6/ paths, as they can cause trouble during installation.

    Corrupted screen: "Six screens" Problem

    For some users with GeForce GT 100M cards, the screen becomes corrupted after X starts: it is divided into six sections with the resolution limited to 640x480. The same problem has recently been reported with the Quadro 2000 and high-resolution displays.

    To solve this problem, enable the Validation Mode NoTotalSizeCheck in section Device:

    Section "Device"
     ...
     Option "ModeValidation" "NoTotalSizeCheck"
     ...
    EndSection
    

    '/dev/nvidia0' input/output error

    The factual accuracy of this article or section is disputed.

    Reason: Verify that the BIOS related suggestions work and are not coincidentally set while troubleshooting. (Discuss in Talk:NVIDIA (日本語)#'/dev/nvidia0' Input/Output error... suggested fixes)

    This error can occur for several different reasons, and the most common solution given for it is to check group/file permissions, which in almost every case is not the problem. The NVIDIA documentation does not explain in detail what you should do to correct this problem, but a few things have worked for some people. The problem can be an IRQ conflict with another device or bad routing by either the kernel or your BIOS.

    The first thing to try is to remove other video devices, such as video capture cards, and see if the problem goes away. If there are too many video processors on the same system, the kernel can be unable to start them because of memory allocation problems with the video controller. On systems with low video memory in particular, this can occur even if there is only one video processor. In that case you should find out the amount of your system's video memory (e.g. with lspci -v) and pass allocation parameters to the kernel, e.g.:

    vmalloc=64M
    or
    vmalloc=256M
    

    If running a 64-bit kernel, a driver defect can cause the NVIDIA module to fail to initialize when the IOMMU is on. Turning it off in the BIOS has been confirmed to work for some users. [2]

    Another thing to try is to change your BIOS IRQ routing from Operating system controlled to BIOS controlled or the other way around. The first one can be passed as a kernel parameter:

    pci=biosirq
    

    The noacpi kernel parameter has also been suggested as a solution, but since it disables ACPI completely it should be used with caution; some hardware is easily damaged by overheating.

    Note: The kernel parameters can be passed either through the kernel command line or the bootloader configuration file. See your bootloader Wiki page for more information.
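    For instance, with GRUB Legacy the parameter is appended to the kernel line in menu.lst; a sketch where the device paths are illustrative:

```
# /boot/grub/menu.lst (sketch)
title  Arch Linux
root   (hd0,0)
kernel /boot/vmlinuz-linux root=/dev/sda1 ro vmalloc=256M
initrd /boot/initramfs-linux.img
```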

    '/dev/nvidiactl' errors

    Trying to start an OpenGL application might result in errors such as:

    Error: Could not open /dev/nvidiactl because the permissions are too
    restrictive. Please see the FREQUENTLY ASKED QUESTIONS 
    section of /usr/share/doc/NVIDIA_GLX-1.0/README 
    for steps to correct.
    

    Solve this by adding the appropriate user to the video group and logging in again:

    # gpasswd -a username video
    

    32 bit applications do not start

    Under 64 bit systems, installing lib32-nvidia-libgl that corresponds to the same version installed for the 64 bit driver fixes the problem.

    Errors after updating the kernel

    If a custom build of NVIDIA's module is used instead of the package from [extra], a recompile is required every time the kernel is updated. Rebooting is generally recommended after updating kernel and graphic drivers.

    Crashing in general

    • Try disabling RenderAccel in xorg.conf.
    • If Xorg outputs an error about "conflicting memory type" or "failed to allocate primary buffer: out of memory", add nopat at the end of the kernel line in /boot/grub/menu.lst.
    • If the NVIDIA compiler complains about different versions of GCC between the current one and the one used for compiling the kernel, add in /etc/profile:
    export IGNORE_CC_MISMATCH=1
    
    • If Xorg is crashing with a "Signal 11" while using nvidia-96xx drivers, try disabling PAT. Pass the argument nopat to kernel parameters.

    More information about troubleshooting the driver can be found in the NVIDIA forums.

    Bad performance after installing a new driver version

    If FPS have dropped in comparison with older drivers, first check if direct rendering is turned on:

    $ glxinfo | grep direct
    

    If the command prints:

    direct rendering: No
    

    then that could explain the sudden FPS drop.

    A possible solution is to revert to the previously installed driver version and reboot afterwards.

    CPU spikes with 400 series cards

    If you are experiencing intermittent CPU spikes with a 400 series card, they may be caused by PowerMizer constantly changing the GPU's clock frequency. To switch PowerMizer's setting from Adaptive to Performance, add the following to the Device section of your Xorg configuration:

     Option "RegistryDwords" "PowerMizerEnable=0x1; PerfLevelSrc=0x3322; PowerMizerDefaultAC=0x1"
    

    Laptops: X hangs on login/out, worked around with Ctrl+Alt+Backspace

    If while using the legacy NVIDIA drivers Xorg hangs on login and logout (particularly with an odd screen split into two black and white/gray pieces), but logging in is still possible via Ctrl-Alt-Backspace (or whatever the new "kill X" keybind is), try adding this in /etc/modprobe.d/modprobe.conf:

    options nvidia NVreg_Mobile=1
    

    One user had luck with this instead, but it makes performance drop significantly for others:

    options nvidia NVreg_DeviceFileUID=0 NVreg_DeviceFileGID=33 NVreg_DeviceFileMode=0660 NVreg_SoftEDIDs=0 NVreg_Mobile=1
    

    Note that NVreg_Mobile needs to be changed according to the laptop:

    • 1 for Dell laptops.
    • 2 for non-Compal Toshiba laptops.
    • 3 for other laptops.
    • 4 for Compal Toshiba laptops.
    • 5 for Gateway laptops.

    See NVIDIA Driver's Readme:Appendix K for more information.

    Refresh rate not detected properly by XRandR dependent utilities

    The XRandR X extension is not presently aware of multiple display devices on a single X screen; it only sees the MetaMode bounding box, which may contain one or more actual modes. This means that if multiple MetaModes have the same bounding box, XRandR will not be able to distinguish between them.

    In order to support DynamicTwinView, the NVIDIA driver must make each MetaMode appear to be unique to XRandR. Presently, the NVIDIA driver accomplishes this by using the refresh rate as a unique identifier.

    Use $ nvidia-settings -q RefreshRate to query the actual refresh rate on each display device.

    The XRandR extension is currently being redesigned by the X.Org community, so the refresh rate workaround may be removed at some point in the future.

    This workaround can also be disabled by setting the DynamicTwinView X configuration option to false, which will disable NV-CONTROL support for manipulating MetaModes, but will cause the XRandR and XF86VidMode visible refresh rate to be accurate.

    No screens found on a laptop/NVIDIA Optimus

    On a laptop, if the NVIDIA driver cannot find any screens, you may have an NVIDIA Optimus setup: an Intel chipset connected to the screen and the video outputs, and an NVIDIA card that does all the hard work and writes to the chipset's video memory.

    Check if $ lspci | grep VGA outputs something similar to:

    00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 02)
    01:00.0 VGA compatible controller: nVidia Corporation Device 0df4 (rev a1)
    

    Since version 319.12 Beta, NVIDIA drivers offer Optimus support [3] with kernel 3.9 and above.

    Another solution is to install the Intel driver to handle the screens; 3D software should then be run through Bumblebee to make it use the NVIDIA card.

    Possible Workaround

    Enter the BIOS and change the default graphics setting from 'Optimus' to 'Discrete'; the installed NVIDIA drivers (295.20-1 at the time of writing) then recognized the screens.

    Steps:

    1. Enter BIOS.
    2. Find Graphics Settings (should be in tab Config > Display).
    3. Change 'Graphics Device' to 'Discrete Graphics' (Disables Intel integrated graphics).
    4. Change OS Detection for Nvidia Optimus to "Disabled".
    5. Save and exit.

    Tested on a Lenovo W520 with a Quadro 1000M and NVIDIA Optimus.

    Screen(s) found, but none have a usable configuration

    On a laptop, the NVIDIA driver sometimes cannot find the active screen. This may happen because your graphics card has VGA/TV outputs. Examine Xorg.0.log to see what is wrong.

    Another thing to try is adding an invalid "ConnectedMonitor" option to the "Device" section to force Xorg to throw an error that shows you how to correct it. Here is more about the ConnectedMonitor setting.

    After restarting X, check Xorg.0.log to get valid CRT-x, DFP-x, TV-x values.

    nvidia-xconfig --query-gpu-info could be helpful.

    No brightness control on laptops

    Try adding the following line to 20-nvidia.conf:

    Option "RegistryDwords" "EnableBrightnessControl=1"
    

    If it still does not work, try installing nvidia-bl or nvidiabl.

    Black Bars while watching full screen flash videos with TwinView

    Follow the instructions presented here: link.

    Backlight is not turning off in some occasions

    By default, DPMS should turn off the backlight with the timeouts set or by running xset. However, probably due to a bug in the proprietary NVIDIA drivers, the result is a blank screen with no power saving whatsoever. To work around it until the bug is fixed, you can use vbetool as root.

    Install the vbetool package.

    Turn off your screen on demand; pressing any key turns the backlight on again:

    vbetool dpms off && read -n1; vbetool dpms on
    

    Alternatively, xrandr is able to disable and re-enable monitor outputs without requiring root.

    xrandr --output DP-1 --off; read -n1; xrandr --output DP-1 --auto
    

    Blue tint on videos with Flash

    A problem with flashplugin versions 11.2.202.228-1 and 11.2.202.233-1 causes it to send the U/V planes in the incorrect order, resulting in a blue tint on certain videos. There are a few potential fixes for this bug:

    1. Install the latest libvdpau.
    2. Patch vdpau_trace.so with this makepkg.
    3. Right click on a video, select "Settings..." and uncheck "Enable hardware acceleration". Reload the page for it to take effect. Note that this disables GPU acceleration.
    4. Downgrade the flashplugin package to version 11.1.102.63-1 at most.
    5. Use google-chromeAUR with the new Pepper API chromium-pepper-flashAUR.
    6. Try one of the few Flash alternatives.

    The merits of each are discussed in this thread.

    Bleeding overlay with Flash

    This bug is due to the incorrect colour key being used by flashplugin version 11.2.202.228-1 and causes the Flash content to "leak" into other pages or solid black backgrounds. To avoid this problem, simply install the latest libvdpau or export VDPAU_NVIDIA_NO_OVERLAY=1 within either your shell profile (e.g. ~/.bash_profile or ~/.zprofile) or ~/.xinitrc.

    Full system freeze using Flash

    If you experience occasional full system freezes (only the mouse is moving) using flashplugin and get:

    /var/log/errors.log
    NVRM: Xid (0000:01:00): 31, Ch 00000007, engmask 00000120, intr 10000000
    

    A possible workaround is to switch off hardware acceleration in Flash by setting:

    /etc/adobe/mms.cfg
    EnableLinuxHWVideoDecode=0

    Or, if you want to keep hardware acceleration enabled, you may try:

    export VDPAU_NVIDIA_NO_OVERLAY=1
    

    ...before starting the browser. Note that this may introduce tearing.

    XOrg fails to load or Red Screen of Death

    If you get a red screen and use GRUB, disable the GRUB framebuffer by editing /etc/default/grub and uncommenting GRUB_TERMINAL_OUTPUT. For more information see GRUB.

    Black screen on systems with Intel integrated GPU

    If you have an Intel CPU with an integrated GPU (e.g. Intel HD 4000) and get a black screen on boot after installing the nvidia package, this may be caused by a conflict between the graphics modules. This is solved by blacklisting the Intel GPU modules. Create the file /etc/modprobe.d/blacklist.conf and prevent the i915 and intel_agp modules from loading on boot:

    /etc/modprobe.d/blacklist.conf
    install i915 /usr/bin/false
    install intel_agp /usr/bin/false
    

    Black screen on systems with VIA integrated GPU

    As above, blacklisting the viafb module may resolve conflicts with NVIDIA drivers:

    /etc/modprobe.d/blacklist.conf
    install viafb /usr/bin/false
    

    X fails with "no screens found" with Intel iGPU

    Like above, if you have an Intel CPU with an integrated GPU and X fails to start with

    [ 76.633] (EE) No devices detected.
    [ 76.633] Fatal server error:
    [ 76.633] no screens found
    

    then you need to add your discrete card's BusID to your X configuration. Find it:

    # lspci | grep VGA
    00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor Graphics Controller (rev 09)
    01:00.0 VGA compatible controller: NVIDIA Corporation GK107 [GeForce GTX 650] (rev a1)
    

    then add it to the card's Device section in your X configuration. For example:

    /etc/X11/xorg.conf.d/10-nvidia.conf
    Section "Device"
        Identifier     "Device0"
        Driver         "nvidia"
        VendorName     "NVIDIA Corporation"
        BusID          "PCI:1:0:0"
    EndSection
    

    Note how 01:00.0 is written as 1:0:0.
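    lspci prints the address in hexadecimal while the Xorg BusID expects decimal values, so for buses numbered 0a and above the two forms differ. A sketch of the conversion; bdf_to_busid is a hypothetical helper name, not part of any NVIDIA tool:

```shell
# Convert an lspci address like "01:00.0" (hexadecimal) into the decimal
# "PCI:bus:device:function" form that the Xorg BusID option expects.
bdf_to_busid() {
    bdf=$1
    bus=$(printf '%d' "0x${bdf%%:*}")    # hex bus      -> decimal
    rest=${bdf#*:}
    dev=$(printf '%d' "0x${rest%%.*}")   # hex device   -> decimal
    fn=$(printf '%d' "0x${rest#*.}")     # hex function -> decimal
    printf 'PCI:%d:%d:%d\n' "$bus" "$dev" "$fn"
}

bdf_to_busid "01:00.0"   # prints PCI:1:0:0
```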

    See also