X.Org/nVidia Optimus

= Overview =

This feature is advertised to boost both performance and battery life in laptops, but it has no direct support from the Linux kernel as of version 2.6.39. Attempts to configure a single X.Org server with both cards and switch between them have failed: setting up the nVidia card as the primary device produces a clean server startup but a blank screen, while the Intel GMA works just fine out of the box. It is, however, possible to set up two X.Org servers and use the Intel GMA and nVidia cards at the same time.

= Basic Installation =

First, drivers for both cards need to be installed.
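On Gentoo this means emerging both driver packages; a minimal sketch, assuming the usual package names:

{{{
# In /etc/portage/make.conf: build support for both cards
VIDEO_CARDS="intel nvidia"

# Then install the Intel DDX driver and the proprietary nVidia driver
emerge x11-drivers/xf86-video-intel x11-drivers/nvidia-drivers
}}}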

To have hardware acceleration enabled, the Intel driver should be set up as the primary OpenGL interface.
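On Gentoo this is done with eselect; the nVidia OpenGL libraries will be passed explicitly to the second server later:

{{{
# Use the Mesa/xorg-x11 libGL system-wide
eselect opengl set xorg-x11
}}}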

= Server Configuration =

The primary server is responsible for rendering to the monitor and is configured in /etc/X11/xorg.conf.
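A minimal sketch of such a configuration; the Identifier names are arbitrary and the BusID is an assumption (check yours with lspci):

{{{
Section "Device"
    Identifier "intel"
    Driver     "intel"
    BusID      "PCI:0:2:0"
EndSection

Section "Screen"
    Identifier "screen-intel"
    Device     "intel"
EndSection
}}}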

The second server drives the nVidia card and needs a separate configuration file, /etc/X11/xorg.nvidia.conf.
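Again a minimal sketch; the BusID and the Gentoo module path are assumptions:

{{{
Section "Files"
    # nVidia's OpenGL/GLX modules instead of the system-wide xorg-x11 ones
    ModulePath "/usr/lib64/opengl/nvidia,/usr/lib64/xorg/modules"
EndSection

Section "Device"
    Identifier "nvidia"
    Driver     "nvidia"
    BusID      "PCI:1:0:0"
    # No outputs are wired to the nVidia card on an Optimus laptop
    Option     "UseDisplayDevice" "none"
EndSection

Section "Screen"
    Identifier "screen-nvidia"
    Device     "nvidia"
EndSection
}}}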

= Server Startup =

The primary server can be started by default, as in a standard installation.

The secondary server needs to be started manually or via a special init script. It must load the nVidia driver and OpenGL libraries, so their location has to be passed on the command line with the '-modulepath' option, since the xorg-x11 implementation was previously selected for the whole system. It also needs to keep running even after all of its client processes have exited ('-noreset' option); nothing keeps it alive on startup, as xdm runs on the primary server, so it would otherwise terminate immediately. To avoid conflicts with the primary server, listening on the TCP socket should be disabled ('-nolisten tcp' option). The last and most important argument is the display number: as this is the second instance, it should use display ':1'. Below is a complete command line.
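A sketch of the full invocation, assuming the Gentoo module paths used in the configuration above:

{{{
X -config /etc/X11/xorg.nvidia.conf \
  -modulepath /usr/lib64/opengl/nvidia,/usr/lib64/xorg/modules \
  -noreset -nolisten tcp :1
}}}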

And here is a sample init script for BaseLayout 2.x, saved as /etc/init.d/optimus. Strictly speaking, the depend section does not need to require xdm itself, only its dependencies, to function standalone; still, since the nVidia card is not physically connected to a monitor, the primary server must be running to make use of its display. Remember to make the script executable.
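A sketch of such a script; the pidfile location is an assumption:

{{{
#!/sbin/runscript
# /etc/init.d/optimus - second X server on the nVidia card

depend() {
    need xdm
}

start() {
    ebegin "Starting X server on the nVidia card"
    start-stop-daemon --start --background \
        --make-pidfile --pidfile /var/run/optimus.pid \
        --exec /usr/bin/X -- \
        -config /etc/X11/xorg.nvidia.conf \
        -modulepath /usr/lib64/opengl/nvidia,/usr/lib64/xorg/modules \
        -noreset -nolisten tcp :1
    eend $?
}

stop() {
    ebegin "Stopping X server on the nVidia card"
    start-stop-daemon --stop --pidfile /var/run/optimus.pid
    eend $?
}
}}}

{{{
chmod +x /etc/init.d/optimus
}}}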

Now, start the optimus service.
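{{{
/etc/init.d/optimus start
}}}

To start it on every boot, add it to the default runlevel:

{{{
rc-update add optimus default
}}}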

= Virtual GL =

To stream frames rendered by the secondary X server (driving the nVidia card) to the primary display, VirtualGL is needed. Fortunately, it is available through Portage.

== Portage Installation ==

Unmask virtualgl, sync the Portage tree, and emerge the virtualgl package.
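A sketch, assuming the package is keyworded under x11-misc/virtualgl; adjust the entry to your architecture:

{{{
echo "x11-misc/virtualgl" >> /etc/portage/package.keywords
emerge --sync
emerge x11-misc/virtualgl
}}}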

== Running Applications ==
First, create a configuration file for VirtualGL at /etc/default/optimus, specifying the display used by the nVidia card, the compression method, and optionally a log file. To run applications on the nVidia card, a simple wrapper script, /usr/local/bin/optirun, needs to be created. Sketches of both follow.
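VGL_DISPLAY, VGL_COMPRESS and VGL_LOG are VirtualGL's own environment variables, while the chosen values here are assumptions:

{{{
# /etc/default/optimus
VGL_DISPLAY=:1                  # display served by the nVidia card
VGL_COMPRESS=proxy              # uncompressed X11 transport; fine locally
VGL_LOG=/var/log/optimus.log    # optional log file
}}}

{{{
#!/bin/sh
# /usr/local/bin/optirun - run a command on the nVidia card via VirtualGL
. /etc/default/optimus
export VGL_DISPLAY VGL_COMPRESS VGL_LOG
exec vglrun "$@"
}}}

Remember to make the wrapper executable with chmod +x /usr/local/bin/optirun.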

To check that everything works, run glxgears through the new wrapper:
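{{{
optirun glxgears
}}}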

You can also check the performance increase using the glxspheres and glxspheres64 programs supplied with VirtualGL.

Here is some sample output:

{{{
fred@iguana /opt/VirtualGL/bin $ optirun ./glxspheres64
Polygons in scene: 62464
Visual ID of window: 0x21
OpenGL Renderer: GeForce GT 520M/PCI/SSE2
47.626504 frames/sec - 42.166602 Mpixels/sec
44.310995 frames/sec - 39.231182 Mpixels/sec
46.325948 frames/sec - 41.015141 Mpixels/sec
46.685491 frames/sec - 41.333467 Mpixels/sec

fred@iguana /opt/VirtualGL/bin $ ./glxspheres64
Polygons in scene: 62464
Visual ID of window: 0x92
OpenGL Renderer: Mesa DRI Intel(R) Sandybridge Mobile
30.748989 frames/sec - 27.223925 Mpixels/sec
29.870514 frames/sec - 26.446158 Mpixels/sec
31.471540 frames/sec - 27.863642 Mpixels/sec
29.858524 frames/sec - 26.435543 Mpixels/sec
}}}

= Links =
 * A good source of information on Hybrid Graphics:
 * Bumblebee project (Gentoo support added):