Nvidia

This article aims to be a comprehensive guide to the official nVidia graphics card drivers, covering more ground than the official installation guide.

Different drivers
Before continuing with the article, it is important to understand the different drivers that are available.

Legacy vs. Current
The drivers for older "legacy" graphics cards used to be held in a separate package called nvidia-legacy-drivers. This is no longer the case and all drivers are now in the nvidia-drivers package. See "Installing the drivers" further down for selecting the correct driver.

Nouveau vs. nv vs. nVidia Drivers
There are several groups developing drivers that support nVidia graphics cards on Linux. The X.Org team maintains an open source driver called 'nv', shipped as part of X.Org itself, which offers only basic 2D support and does not support recent cards. There is also a reverse-engineered open source driver with experimental 3D support called nouveau; see this page if you wish to use it. The driver that is the focus of this article is the proprietary driver released by nVidia, called 'nvidia'. Only a small amount of open source code is provided to interface the kernel with the binary nvidia driver itself. These projects have nothing to do with each other, and keeping them straight is important, especially when you start configuring X.Org near the end of this article. You only need one of them - not both - and this article will help you install the one released by nVidia.

Preparing Your System
The official nVidia drivers are provided in two parts - the kernel module and the X.Org driver - both of which are held in the nvidia-drivers package. Therefore, you need to make sure your kernel is set up to support module loading and to provide access to Memory Type Range Registers (MTRR).

Uninstalling the Open Source Nouveau Driver
If switching to the NVidia proprietary drivers, it is best to clear the open source version off the system. Adjust the following kernel options:

If nouveaufb was in use for a console framebuffer, switch to the VESA framebuffer or a standard text console:

Remember to configure the VESA framebuffer before using it; otherwise it will fall back to the VGA text console (if compiled in), or the screen will freeze.
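As a sketch, under a 2.6-series kernel the relevant .config entries look roughly like this (symbol names vary between kernel versions, so verify them in menuconfig):

```
# Disable the in-kernel nouveau DRM driver, if your tree has it:
# CONFIG_DRM_NOUVEAU is not set
# Provide a VESA framebuffer console instead of nouveaufb:
CONFIG_FB_VESA=y
```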

Selecting the Right Kernel
Kernel module packages use the /usr/src/linux symlink to determine which kernel they should build against. If this link is already correct, move on to Required Kernel Settings.

Usually kernel modules should be built against the currently running kernel, so find out what that is by running:
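For example (`uname` is standard, so this can be run anywhere):

```shell
# Print the release string of the currently running kernel:
uname -r
```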

Gentoo provides a handy tool for changing lots of settings on the system called eselect. One of eselect's modules is for changing the symlink.

List all available kernel source directories with:

Find out which one the symlink is currently pointing to with:

Now point the symlink at the desired kernel source directory with eselect kernel set <number>, where <number> is the number next to the kernel in the list. For example, if the symlink should point to item number 5 in the list, run:
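Under the hood, `eselect kernel set` just repoints the /usr/src/linux symlink. The following sketch demonstrates the effect in a temporary directory (the version names are made up), so it is safe to run anywhere:

```shell
tmp=$(mktemp -d)
mkdir "$tmp/linux-2.6.34-gentoo-r1" "$tmp/linux-2.6.35-gentoo-r2"
# Equivalent of `eselect kernel set 2` for this listing:
ln -sfn "$tmp/linux-2.6.35-gentoo-r2" "$tmp/linux"
readlink "$tmp/linux"    # prints the selected source directory
rm -rf "$tmp"
```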

Required Kernel Settings
Make sure you have the following options enabled:

AGP support is optional, dependent on your type of graphics card:

Make sure you have the following options disabled. These options conflict with nVidia's driver:
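As a sketch, the options above map to .config symbols roughly as follows (names vary by kernel version; verify in menuconfig):

```
# Required:
CONFIG_MODULES=y     # loadable kernel module support
CONFIG_MTRR=y        # MTRR (Memory Type Range Register) support
# Optional, for AGP cards: CONFIG_AGP plus your chipset's AGP driver
# Conflicting -- must be disabled:
# CONFIG_DRM_NOUVEAU is not set   # in-kernel nouveau driver
# CONFIG_FB_RIVA is not set       # rivafb framebuffer driver
# CONFIG_FB_NVIDIA is not set     # nvidiafb framebuffer driver
```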

If you need help configuring, building, and installing your new kernel, read the official Gentoo kernel guide.

Determining Your Card ID and Model
Use lspci to find out what card you have. Note the identifier of the target card you wish to enable support for. (Adding -v or -vv will increase verbosity.)

lspci

Using the -ns option followed by the identifier will show the numeric ID as it should appear on the List of Supported Devices.

lspci -ns 01:00.0
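As a sketch of reading that output, the sample line below is hypothetical; on a real system you would pipe `lspci -n -s 01:00.0` instead. The third field is the vendor:device ID (10de is nVidia's PCI vendor ID):

```shell
sample='01:00.0 0300: 10de:0141 (rev a2)'
# Extract the vendor:device ID (the third whitespace-separated field):
printf '%s\n' "$sample" | awk '{print $3}'    # prints 10de:0141
```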

Installing the drivers
To install the driver, you need to emerge the x11-drivers/nvidia-drivers package.

The nvidia-drivers package supports the full range of available nVidia cards. Multiple versions are available for installation, depending on the card(s) you have.


 * Newer cards such as the GeForce 8, 7 and 6 series should use the newest available drivers.
 * Older cards such as GeForce FX 5 series and related Quadro FX cards require the 173.x drivers. For these cards, you should mask >=x11-drivers/nvidia-drivers-180.00 in your /etc/portage/package.mask file.
 * Older cards such as the GeForce 3 or GeForce 4 and related Quadro4 and some Quadro2 series require the 96.x drivers. For these cards, you should mask >=x11-drivers/nvidia-drivers-97.00 in your /etc/portage/package.mask file. This will prevent newer versions of the driver which are incompatible with your card from being installed.
 * Very old cards (such as TNT, TNT2, GeForce, GeForce 2, Quadro and Quadro2) require the older 71.x drivers (such as nvidia-drivers-71.86.01). For these cards, you should mask >=x11-drivers/nvidia-drivers-72.00 in your /etc/portage/package.mask file.
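The masking described above can be sketched as follows; each entry is one line in /etc/portage/package.mask, and the 72.00 bound for the 71.x drivers follows the same pattern as the others:

```
# /etc/portage/package.mask -- pick the line matching your card
>=x11-drivers/nvidia-drivers-180.00   # GeForce FX 5 series: stay on 173.x
>=x11-drivers/nvidia-drivers-97.00    # GeForce 3/4: stay on 96.x
>=x11-drivers/nvidia-drivers-72.00    # TNT/GeForce/GeForce 2: stay on 71.x
```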

See also the Nvidia Legacy GPU page.

Installing the Latest "Unstable" Release
If you want to run the latest "testing" release, i.e. the latest release from nVidia, you will need to unmask and install the newest package. Unmask the nvidia-drivers package by adding it to your /etc/portage/package.keywords file:
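A sketch of the entry (the keyword shown assumes an x86 system; use your own architecture's testing keyword, e.g. ~amd64):

```
# /etc/portage/package.keywords
x11-drivers/nvidia-drivers ~x86
```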

Beta Drivers
On some occasions, the very latest nVidia drivers will be masked by package.mask. This usually happens when those drivers are considered "beta" by nVidia - i.e. even nVidia considers them unstable. To use these drivers, you'll need to add them to both your /etc/portage/package.unmask and /etc/portage/package.keywords files:
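A sketch of the two entries (the keyword shown assumes an x86 system):

```
# /etc/portage/package.unmask
x11-drivers/nvidia-drivers

# /etc/portage/package.keywords
x11-drivers/nvidia-drivers ~x86
```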

Switching to the New Driver
The X.Org configuration file needs to be updated so that X will use the new driver. To do this, edit /etc/X11/xorg.conf, find the 'Device' section where the graphics card is configured, and replace the Driver entry with 'nvidia'. The following shows an example "before and after".
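A sketch of the change (the Identifier shown is an example; keep whatever your file already has):

```
# Before:
Section "Device"
    Identifier  "nVidia card"
    Driver      "nv"
EndSection

# After:
Section "Device"
    Identifier  "nVidia card"
    Driver      "nvidia"
EndSection
```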

Activating GLX
To enable 3D acceleration, the GLX module needs to be activated and both the DRI and GLcore modules must be deactivated.

In the /etc/X11/xorg.conf file:
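A sketch of the Module section (any other Load lines your file has stay as they are):

```
Section "Module"
    Load  "glx"       # enable GLX
    #Load "dri"       # must be disabled
    #Load "GLcore"    # must be disabled
EndSection
```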

Now the module is enabled in the configuration, but for it to work, X must be running in 16- or 24-bit color mode.

To do this set the DefaultDepth setting in the 'Screen' section of the xorg.conf as shown in the example below. Please note that there must be 16-bit and/or 24-bit modes in the "Display" subsection of the "Screen" section.
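A sketch of the Screen section (the Identifier and mode list are examples; use the modes your monitor supports):

```
Section "Screen"
    Identifier   "Default Screen"
    DefaultDepth 24
    SubSection "Display"
        Depth  24
        Modes  "1280x1024" "1024x768"
    EndSubSection
EndSection
```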

There are a number of different opengl libraries available and it's possible to install more than one. To manage this situation, Gentoo uses the eselect tool (as already shown earlier).

To tell Gentoo to use the nvidia opengl implementation, run eselect opengl set nvidia.

Adding Users to the Video Group
To protect the system from malicious activities, Linux restricts which users can access a given piece of hardware. In the case of the video card (which needs to be accessed for 3D acceleration), users must be members of the 'video' group.

For each user you wish to allow to use 3D acceleration, run the following command, where <username> is the name of the user you wish to add to the group:
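A sketch of the workflow; <username> is a placeholder, and the gpasswd step must be run as root, so it is shown commented here:

```shell
# As root, add the user to the video group:
#   gpasswd -a <username> video
# The change takes effect at the user's next login. Membership can then
# be verified with id, which lists the current user's groups:
id -nG    # "video" should appear in the output
```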

(Re)starting X
Now X must be restarted with the new configuration. You'll want to read all of this section before carrying out any commands.

If you are logged in to X, first log out (usually by choosing "Log Out" or "Quit" from the menu of your desktop environment).

If you're at the console, restart X by running startx.

If you're at a graphical login screen, you still need to restart X. This can be achieved by pressing Ctrl+Alt+Backspace. This key combination kills the currently running X server; X is then restarted by the display manager (xdm, gdm or kdm).

Testing Your Configuration
To ensure that 3D acceleration is working, from a console running inside of X and as a normal user (not as root), run:

You should see that direct rendering is enabled, as shown in the example output below.

If the glxinfo command is not found, install the mesa-progs package with emerge x11-apps/mesa-progs.

glxinfo output example

Remove the nvidia splash screen
Normally when X starts with the nvidia drivers installed, a splash screen is shown. This can be removed by setting the NoLogo option to "true" as shown in the example below.
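A sketch of the option in the Device section (the Identifier is an example):

```
Section "Device"
    Identifier  "nVidia card"
    Driver      "nvidia"
    Option      "NoLogo" "true"
EndSection
```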

Activating NV30 Emulation for lower architectures
Unless you know what this does, you should not use it; it will not increase performance. It is possible (even if no longer advertised on nVidia's pages) to emulate the NV30 architecture on older cards - for example, to run FX pixel shaders on a GeForce2 Go. This is achieved by adding the "NVEmulate" option to the 'Device' section for the nvidia card in /etc/X11/xorg.conf:
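A sketch of the option line, added inside that Device section:

```
Option "NVEmulate" "30"
```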

It may also be possible to use the value "40" instead of "30", which would suggest that the NV40 architecture can be emulated as well, but this is unconfirmed. (Keywords for search engines, since this information is difficult to find: __GL_NV30EMULATE, GL_NV30_EMULATE.)

Activating Coolbits: Overclocking Controls for nVidia Settings
There are many fine pages about where to begin with overclocking, you should read several and fully understand what you are doing before altering the settings for your card.

Beginning with driver version 1.0-7664, Coolbits (support for GPU clock manipulation) is included.

To activate Coolbits, open /etc/X11/xorg.conf in a text editor and add the following line in Section "Device":
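A sketch of the line:

```
Option "Coolbits" "1"
```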

Restart your X-server and nvidia-settings.

There will be a new item, Clock Frequencies, in the left column list of categories in nvidia-settings. Click the "Enable Overclocking" checkbox, then read and accept the license agreement.

You can now set the frequencies yourself or use the auto detect feature to find "optimal" values. The overclock settings will not survive restarting X.

To fix this, add this line to your ~/.xinitrc:

The first set of tags (GPU2DClockFreqs) is for 2D and the second set (GPU3DClockFreqs) is for 3D. Substitute <gpu> and <mem> with your desired GPU and memory clock frequencies, respectively. If you have a second graphics card in your system, add another line and change [gpu:0] to [gpu:1].
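A sketch of the ~/.xinitrc line; <gpu> and <mem> are the placeholders described above, and the GPUOverclockingState attribute name is an assumption worth verifying against `nvidia-settings -q all` on your driver version:

```
nvidia-settings -a "[gpu:0]/GPUOverclockingState=1" \
                -a "[gpu:0]/GPU2DClockFreqs=<gpu>,<mem>" \
                -a "[gpu:0]/GPU3DClockFreqs=<gpu>,<mem>"
```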

Manual Fan Control for nVIDIA Settings
Some combinations of nvidia cards and driver versions report that fan-speed is "variable", but do not actually ever change the fan speed regardless of temperature. If you experience an unreasonably hot GPU and nvidia-settings reports your fan speed as "Variable" but never leaves its assigned value, try the below.

It's probably a good idea to read about the CoolBits option before beginning. Take a look at the nvidia-settings manual (man nvidia-settings) and the nvidia-drivers manual, available at http://us.download.nvidia.com/XFree86/Linux-x86/195.36.24/README/xconfigoptions.html (adjust the version in the URL as appropriate - be careful about looking at out-of-date documentation on the CoolBits option!).

If your card is described in multiple "Device" sections, put the Coolbits option in each of them.

Inside X, run nvidia-settings. You should now find "GPU Fan Settings" controls in the "Thermal Settings" section. My suggestion is to crank this up to 100.

You may also modify your fan speed from the command line:

Enable GPU fan control:

Find out the fan's resource id using:

Then set the speed using:

where <speed> is the percentage of full speed.
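A sketch of the three steps; the attribute names (GPUFanControlState, GPUCurrentFanSpeed) match drivers of this era but should be verified against `nvidia-settings -q all` on your version:

```
# Enable GPU fan control:
nvidia-settings -a "[gpu:0]/GPUFanControlState=1"
# Find the fan's resource id:
nvidia-settings -q fans
# Set the speed, where <speed> is a percentage of full speed:
nvidia-settings -a "[fan:0]/GPUCurrentFanSpeed=<speed>"
```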

These settings will not be permanent - to have them take effect every time that X is launched, add the below to your ~/.xinitrc

KDE 4 users will need to add a symlink to ~/.xinitrc in the Autostart directory since ~/.xinitrc isn't sourced by KDM:

If ~/.xinitrc is not being autostarted, then make sure your ~/.xinitrc has a shebang (#!/bin/sh) at the top.

Black/Blank screen when starting X
Symptoms: A black/blank screen when X starts, followed by the monitor going into standby mode after a moment. Ctrl+Alt+Backspace doesn't kill X and get you back to the console.

Problem: This issue is caused by bad refresh rate values given to X in your /etc/X11/xorg.conf file.

Check the X.Org log file, usually /var/log/Xorg.0.log, to find what values are actually being used and where they're being obtained from.

In order to fix this, you will need to find the correct HorizSync and VertRefresh values for your monitor. Sources for possible values include:
 * Values that do work with vesa driver
 * Values given in the technical specs of your monitor (you should be able to find these in the manual, usually available on the manufacturers website).
 * Values from other Linux installs (check their xorg.conf files).
 * You can also attempt searching online for others' configurations for your monitor (Search for your monitors model and "xorg.conf" or "modeline").
 * The "nvidia-xconfig" utility may generate correct values.
 * You may need to tweak the Devices section to indicate which monitor is connected:
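A sketch of such a tweak, assuming a CRT is attached (use "DFP" for a digital flat panel):

```
Section "Device"
    Driver  "nvidia"
    Option  "ConnectedMonitor" "CRT"
EndSection
```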

Unable to validate video modes
If you have an older monitor with bad or no DDC/EDID information, nvidia-auto-select may fail to validate perfectly good modelines that have worked for years, leaving you stranded with 1024x768 or worse. To fix this, add MetaModes to your monitor section like this:
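A sketch of the section (the Identifier, sync ranges and modes are examples; substitute your monitor's real values):

```
Section "Monitor"
    Identifier  "Monitor0"
    HorizSync   30-81
    VertRefresh 56-75
    Option "MetaModes" "1600x1200, 1280x1024"
EndSection
```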

This apparently tricks the nvidia driver into actually trying to find a good mode for your monitor, rather than just giving you a bad default.

Error: libnvidia-tls.so.1: cannot handle TLS data
After re-emerging the nvidia drivers several times, it may happen that the glx module fails to load without any apparent reason, with the error "libnvidia-tls.so.1: cannot handle TLS data". This issue is caused by the two copies of libnvidia-tls.so.1 - the one in the tls folder and the one in the no-tls folder - being inverted. The fix is quite simple: swap the two files. Before trying this, check whether the file in the no-tls folder is smaller than the one in the tls folder. If it is, the files are already in the correct folders, so do not swap them. If it is not, then swap the two files.
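A sketch of swapping two same-named files between a tls/ and no-tls/ directory, demonstrated in a temporary directory so it can be run safely; the real library directories live under your nvidia opengl path, so adjust the paths accordingly:

```shell
tmp=$(mktemp -d)
mkdir "$tmp/tls" "$tmp/no-tls"
echo tls-version    > "$tmp/tls/libnvidia-tls.so.1"
echo no-tls-version > "$tmp/no-tls/libnvidia-tls.so.1"
# Three-step swap via a temporary name:
mv "$tmp/tls/libnvidia-tls.so.1" "$tmp/swap.tmp"
mv "$tmp/no-tls/libnvidia-tls.so.1" "$tmp/tls/libnvidia-tls.so.1"
mv "$tmp/swap.tmp" "$tmp/no-tls/libnvidia-tls.so.1"
cat "$tmp/tls/libnvidia-tls.so.1"    # prints no-tls-version
rm -rf "$tmp"
```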

Restart X and the glx module should load fine, this time. If not, update your glibc!

UDev Users: Fix Device Creation Problem
Udev doesn't like nVidia... or maybe nVidia doesn't like Udev. Either way, you have to run 'NVmakedevices.sh' to build the character devices that allow your computer to access your card. Here's the rub: you'll probably have to run NVmakedevices.sh every time you boot up your computer, which isn't difficult. Just do the following:
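A sketch of the local.start entry (adjust the path to wherever NVmakedevices.sh was installed on your system):

```
# /etc/conf.d/local.start
NVmakedevices.sh >/dev/null 2>&1
```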

Ok, problem solved. Your local.start script will run NVmakedevices.sh during boot, before the computer switches to your default runlevel so you're safe to have your computer boot into GDM or whatever graphical login manager you choose.

dmesg or building the module returns unknown symbol errors
dmesg output gives something like this:

Possible solutions:
 * Disable ccache - most likely it is the fault (see the forum thread and bug report). Disable it when rebuilding this module, e.g. by emerging with FEATURES="-ccache".
 * Another cause could be the version of your nvidia driver or of the kernel. Simply try another driver/kernel.
 * If the problem persists, you might try this:

Edit your kernel menuconfig and check:

This might help because that is where pci_find_class is defined.

When you attempt to load the kernel module you receive a "insmod: error inserting ... Invalid module format"
This type of error can be diagnosed by running dmesg: the output will indicate the source of the problem.

Common reasons for this error include:
 * Using the wrong kernel preemption option.
 * Your kernel module was compiled with a different gcc version than your kernel. In this case simply re-emerge the kernel and nvidia kernel module.

Preemption
In this case you'll probably see a message like:

If the kernel is not compiled to be a preemptible kernel then trying to insert the nvidia.ko module will fail. To make the kernel preemptible, use your favorite text editor to modify your kernel's configuration file (/usr/src/linux/.config by default) and be sure to amend:

As well as commenting out all other CONFIG_PREEMPT type options.
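The resulting .config fragment looks roughly like this:

```
CONFIG_PREEMPT=y
# CONFIG_PREEMPT_NONE is not set
# CONFIG_PREEMPT_VOLUNTARY is not set
```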

X freezes when running glxinfo or OpenGL apps
It may happen that X.org freezes every time you launch an OpenGL app; even glxinfo does the job. This is caused by the nvidia driver oopsing in the background, which you can watch by tail-ing the kernel log over ssh. One reason is that the nvidia drivers do not like a kernel with PaX (hardened-sources) or an NX/XD bit (NoeXecute/eXecuteDisable) enabled in the BIOS. If you have no special reason to have this enabled, disable it and nvidia will work - at the loss of some security. Otherwise there are several patches floating around the nvnews.net forums.

Whole system freezes on Logout or Switching to Console
A workaround for this problem is to disable the framebuffer console, by Compiling a Kernel without vesafb-tng, or (if you use standard vesafb) by not using "vga=" kernel parameters in your bootloader. 

50 Hz refresh rate
If the refresh rate is shown as 50 Hz and you know it shouldn't be, then disable DynamicTwinView in the Device or Screen section of /etc/X11/xorg.conf. The option is also described in the driver documentation.
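A sketch of the option line:

```
Option "DynamicTwinView" "false"
```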

nvidia_drv.so: undefined symbol with xorg-server-1.5
Older versions of nvidia-drivers do not support xorg-server-1.5.

The following error is produced:

The solution is to wind back the clock to an older, compatible version combination.

Blank Screen When Switching From X to Console
When you're using the nVidia binary driver, it may at times conflict with the default kernel (tty) console, causing it to show blank (e.g. when using "chvt 1"). The console still works, it's just blank or not viewable. Blind typing will work.

If you really want a console, a work-around is to configure the kernel for a tty serial console. This requires a null serial (DB9) cable. Default is connecting it from COM1 (/dev/ttyS0) to COM1 on the other computer.

Then, configure boot kernel parameters. For example:
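A sketch of the parameters (115200 baud is an example; listing tty0 first keeps a console on the local display too):

```
console=tty0 console=ttyS0,115200n8
```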

Emerge and configure kermit on the remote computer:

As user, not root:
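A sketch of a ~/.kermrc for the remote computer (the device and speed are examples; match them to your serial setup):

```
set line /dev/ttyS0
set speed 115200
set carrier-watch off
set flow-control none
```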

Start kermit as user and type "connect" and reboot your other computer with its nvidia driver.

This will get you working dmesg output on the remote computer. To get a login tty terminal to log into:
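One sketch is to add a getty on the serial line to /etc/inittab (the baud rate and terminal type are examples):

```
s0:12345:respawn:/sbin/agetty -L 115200 ttyS0 vt100
```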

This will restart init (hopefully without rebooting) and simply reload your configuration file.

Bingo! A console TTY terminal to go! The nice thing about this is that you can plug/unplug the serial cable anytime - leaving the exported terminal active. If you enjoy this, check out KGDB. ;-)

(If you really want, you can also export the init startup info printed on console, but it's a one way deal. You won't see it on both monitors if you export it to the remote computer.  I find it unnecessary screen clutter.)