From ArchWiki
Revision as of 22:21, 16 May 2013 by Sas (talk | contribs) (flesh out Troubleshooting)

[[Category:Networking]] [[Category:Graphics]] ***Re-enable categories when article is moved to Main namespace***

VirtualGL redirects an application's OpenGL/GLX commands to a separate X server (that has access to a 3D graphics card), captures the rendered images, and then streams them to the X server that actually handles the application.

The main use-case is to enable server-side hardware-accelerated 3D rendering for remote desktop set-ups where the X server that handles the application is either on the other side of the network (in the case of X11 forwarding), or a "virtual" X server that can't access the graphics hardware (in the case of VNC).

Installation & Setup


Using VirtualGL with X11 Forwarding


 server:                                              client:
······································               ·················
: ┌───────────┐ X11 commands         :               : ┌───────────┐ :
: │application│━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━▶│X server   │ :
: │           │        ┌───────────┐ :               : │           │ :
: │           │        │X server   │ :               : ├┈┈┈┈┈┈┈┈┈╮ │ :
: │ ╭┈┈┈┈┈┈┈┈┈┤ OpenGL │ ╭┈┈┈┈┈┈┈┈┈┤ : image stream  : │VirtualGL┊ │ :
: │ ┊VirtualGL│━━━━━━━▶│ ┊VirtualGL│━━━━━━━━━━━━━━━━━━▶│client   ┊ │ :      = "2D" rendering happens here
: └─┴─────────┘        └─┴─────────┘ :               : └─────────┴─┘ :      = "3D" rendering happens here
······································               ·················

Advantages of this set-up, compared to using VirtualGL with VNC:

  • seamless windows
  • uses slightly fewer CPU resources on the server side
  • supports stereo rendering (for viewing with "3D glasses")


1. Preparation

In addition to setting up VirtualGL on the remote server as described above, this usage scenario requires you to:

  • install the virtualgl package on the client side as well (there is no need to configure it as on the server side; only the vglconnect and vglclient binaries are needed on this end).
  • set up SSH with X11 forwarding (confirm that connecting from the client to the server via ssh -X user@server and running GUI applications in the resulting shell works)
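For example, the client-side preparation might look like this (run the pacman command as root; user@server is a placeholder for your actual login and host):

```shell
# Install VirtualGL on the client (provides the vglconnect and vglclient
# binaries); no server-style configuration is needed on this side.
pacman -S virtualgl

# Verify that plain X11 forwarding works before involving VirtualGL
# (xclock is just an example; any GUI application will do):
ssh -X user@server xclock
```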

2. Connecting

Now you can use vglconnect on the client computer whenever you want to connect to the server:

$ vglconnect user@server     # X11 traffic encrypted, VGL image stream unencrypted
$ vglconnect -s user@server  # both X11 traffic and VGL image stream encrypted

This opens an SSH session with X11 forwarding just like ssh -X would, and also automatically starts the VirtualGL Client (vglclient) with the right parameters as a background daemon. This daemon will handle incoming VirtualGL image streams from the server, and will keep running in the background even after you close the SSH shell - you can stop it with vglclient -kill.

3. Running applications

Once connected, you can run remote applications with VirtualGL rendering enabled for their OpenGL parts, by starting them inside the SSH shell with vglrun:

$ vglrun glxgears

You don't need to restrict yourself to the shell that vglconnect opened for you; any ssh -X or ssh -Y shell you open from the same X session on the client to the same user@server should work. vglrun will detect that you are in an SSH shell, and make sure that the VGL image stream is sent over the network to the IP/hostname belonging to the SSH client (where the running vglclient instance will intercept and process it).

In this usage scenario, vglrun will by default compress its image stream using JPEG at 90% quality, but you can customize this using environment variables or command-line parameters, e.g.:

$ vglrun -q 30 -samp 4x glxgears  # use aggressive JPEG compression and subsampling (to reduce bandwidth demand)
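The same settings can also be expressed as environment variables instead of command-line switches; VGL_QUAL and VGL_SUBSAMP correspond to -q and -samp:

```shell
# Equivalent to `vglrun -q 30 -samp 4x glxgears`, using environment variables:
VGL_QUAL=30 VGL_SUBSAMP=4x vglrun glxgears
```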

Refer to vglrun -help and the user manual to learn about all available options. There is also a GUI dialog that lets you change the most common VirtualGL rendering/compression options for an application on the fly, after you have already started it with vglrun - simply press Ctrl+Shift+F9 while the application has keyboard focus to open this dialog.

Using VirtualGL with VNC


 server:                                                           client:
···················································               ················
: ┌───────────┐ X11 commands         ┌──────────┐ : image stream  : ┌──────────┐ :
: │application│━━━━━━━━━━━━━━━━━━━━━▶│VNC server│━━━━━━━━━━━━━━━━━━▶│VNC viewer│ :
: │           │        ┌───────────┐ └──────────┘ :               : └──────────┘ :
: │           │        │X server   │        ▲     :               :              :
: │ ╭┈┈┈┈┈┈┈┈┈┤ OpenGL │ ╭┈┈┈┈┈┈┈┈┈┤ images ┃     :               :              :
: │ ┊VirtualGL│━━━━━━━▶│ ┊VirtualGL│━━━━━━━━┛     :               :              :      = "2D" rendering happens here
: └─┴─────────┘        └─┴─────────┘              :               :              :      = "3D" rendering happens here
···················································               ················

Advantages of this set-up, compared to using VirtualGL with X11 Forwarding:

  • can maintain better performance in case of low-bandwidth/high-latency networks
  • can send the same image stream to multiple clients ("desktop sharing")
  • the remote application can continue running even when the network connection drops
  • better support for non-Linux clients, as the architecture does not depend on a client-side X server


After setting up VirtualGL on the remote server as described above, and establishing a working remote desktop connection using the VNC client/server implementation of your choice, no further configuration should be needed.

Inside the VNC session (e.g. in a terminal emulator within the VNC desktop or even directly in ~/.vnc/xstartup), simply run selected applications with vglrun in order to activate VirtualGL rendering for their OpenGL parts:

$ vglrun glxgears
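If you want certain applications to use VirtualGL automatically in every session, vglrun can also go directly into ~/.vnc/xstartup. A minimal sketch (the application and session commands are placeholders; adjust them to your set-up):

```shell
#!/bin/sh
# ~/.vnc/xstartup - run by the VNC server for each new session.
vglrun myapp &     # "myapp" is a placeholder for an OpenGL application
exec startxfce4    # placeholder for your preferred desktop session
```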

Note that in this usage scenario, many of vglrun's command-line options (e.g. those relating to image stream compression or stereo rendering) are not applicable, because there is no vglclient daemon running on the other end - the raw images are sent directly (through the normal X11 protocol) to the VNC server, which is running on the same machine. It is now the VNC server that handles all the image stream optimization/compression, so that is where you should turn for fine-tuning.

Choosing an appropriate VNC package

VirtualGL can provide 3D rendering for any general-purpose vncserver implementation (e.g. TightVNC, RealVNC, ...).
However, if you want to really get good performance out of it (e.g. to make it viable to watch videos or play OpenGL games over VNC), you might want to use one of the VNC implementations that are specifically optimized for this use-case:

  • TurboVNC: Developed by the same team as VirtualGL, with the explicit goal of providing the best possible performance in combination with it.
  • TigerVNC: Also developed with VirtualGL in mind and achieves good performance with it, while providing better Xorg compatibility than TurboVNC.

Tips & Tricks

Confirming that VirtualGL rendering is active

If you set the VGL_LOGO environment variable before starting an application with vglrun, a small logo reading "VGL" will be shown in the bottom-right corner of any OpenGL scene that is rendered through VirtualGL in that application:

$ VGL_LOGO=1 vglrun glxgears

Measuring performance



Tip: Running vglrun with the +v command-line switch (or environment variable VGL_VERBOSE=1) makes VirtualGL print out some details about its attempt to initialize rendering for the application in question. The +tr switch (or variable VGL_TRACE=1) will make it print out lots of live info on intercepted OpenGL function calls during the actual rendering.
By default VirtualGL prints all its debug output to the shell - if you want to separate it from the application's own STDERR output you can set VGL_LOG=/path/to/mylogfile.
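For example, to collect the verbose VirtualGL output in a separate file while keeping the application's own stderr in the shell:

```shell
# +v enables verbose VirtualGL output; VGL_LOG redirects it to a file.
VGL_LOG=/tmp/vgl.log vglrun +v glxgears
```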

vglrun seems to have no effect

In some cases, VirtualGL may fail to take effect in an OpenGL application - i.e. 3D rendering fails with an error like Xlib: extension "GLX" missing or falls back to software rendering, just like it would when running the remote application without VirtualGL (see also #Confirming_that_VirtualGL_rendering_is_active).

This may happen when something blocks VirtualGL from being preloaded into the application's executable(s). Preloading works as follows: vglrun adds the names of some VirtualGL libraries to the LD_PRELOAD environment variable before running the command that starts the application. When an application binary is executed as part of this command, the Linux kernel loads the dynamic linker, which in turn detects the LD_PRELOAD variable and links the specified libraries into the in-memory copy of the application binary before anything else. This will obviously not work if the environment variable does not reach the dynamic linker, e.g. in the following cases:

  • The application is started through a script that explicitly unsets/overrides LD_PRELOAD
    Solution: Edit the script to comment out or fix the offending line. (You can put the modified script in /usr/local/bin/ to prevent it from being reverted on the next package upgrade.)
  • The application explicitly unsets LD_PRELOAD from inside the binary
    Solution: Run vglrun with the -ge command-line switch, which tries to fool the application into thinking the variable is already unset.
  • The application is started through multiple layers of scripts that fail to propagate LD_PRELOAD
    Solution: Modify the final script that actually runs the application, to make it run the application with vglrun.
  • The application is started by a loader application, in a way that fails to propagate LD_PRELOAD
    Solution: If possible, bypass the loader application and start the actual OpenGL application directly with vglrun.
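To find out where in a chain of scripts LD_PRELOAD gets lost, you can temporarily add a debug line to a suspect script (a hypothetical sketch; the application path is a placeholder):

```shell
#!/bin/sh
# Hypothetical launcher script for an OpenGL application.
# Temporary debug line: if this prints an empty value, LD_PRELOAD
# was dropped somewhere before this point in the chain.
echo "LD_PRELOAD is: '$LD_PRELOAD'" >&2
exec /usr/bin/some-opengl-app "$@"   # placeholder for the real binary
```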

vglrun fails with errors

If VirtualGL rendering does not work and you see error messages like

ERROR: object '' from LD_PRELOAD cannot be preloaded: ignored.
ERROR: object '' from LD_PRELOAD cannot be preloaded: ignored.

in the shell output, then the dynamic linker is correctly receiving instructions to preload the VirtualGL libraries into the application, but something prevents it from successfully performing this task. Two possible causes are:
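This warning comes from the dynamic linker itself, and you can reproduce it with any dynamically linked program by pointing LD_PRELOAD at a nonexistent file:

```shell
# Pointing LD_PRELOAD at a nonexistent file makes the dynamic linker
# print a warning to stderr and then ignore the entry; the program
# itself still runs normally.
LD_PRELOAD=/no/such/library.so ls / > /dev/null
```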

  • The VirtualGL libraries for the correct architecture are not installed
    If you are using a 64-bit Arch Linux system and want to run a 32-bit application (like Wine) with VirtualGL, you need to install lib32-virtualgl from the [multilib] repository.
  • The application executable has the setuid/setgid flag set
    You can confirm whether this is the case by inspecting the executable's file permissions using ls -l: it will show the letter s in place of the user executable bit if setuid is set (for example -rwsr-xr-x), and in place of the group executable bit if setgid is set. For such an application any preloading attempt will fail, unless the libraries to be preloaded have the setuid flag set as well. You can set this flag for the VirtualGL libraries in question by executing the following as root:

    # chmod u+s /usr/lib/lib{rr,dl}    # for the native-architecture versions provided by virtualgl
    # chmod u+s /usr/lib32/lib{rr,dl}  # for the multilib versions provided by lib32-virtualgl

    However, you should probably not do this on a multi-user server unless you fully understand the security implications.
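As a quick self-contained illustration of how the setuid bit appears in a permission string (using a scratch file rather than a real binary):

```shell
# Create a scratch file, set the setuid bit, and inspect its permissions;
# an "s" in the owner-execute position (instead of "x") marks setuid.
f=$(mktemp)
chmod u+x,u+s "$f"
stat -c '%A' "$f"
rm -f "$f"
```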

Rendering glitches, unusually poor performance, or application errors

OpenGL is a low-level and flexible API, which means that different OpenGL applications may employ very different rendering techniques. VirtualGL's default strategy for redirecting rendering and for deciding how/when to capture a new frame works well with most interactive 3D programs, but may prove inefficient or even problematic for some applications. If you suspect that this is the case, you can tweak VirtualGL's mode of operation by setting certain environment variables before starting your application with vglrun. For example, you could try setting some of the following values (try them one at a time, and be aware that each of them could also make things worse!):

VGL_SYNC=1  # use VNC with this one, it's very slow with X11 forwarding

A few OpenGL applications also make strong assumptions about their X server environment or loaded libraries that may not hold in a VirtualGL set-up, causing those applications to fail. The environment variables VGL_DEFAULTFBCONFIG, VGL_GLLIB, VGL_TRAPX11, VGL_X11LIB and VGL_XVENDOR can be used to fix this in some cases.
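For example, an application that refuses to start because it checks the X server vendor string could be coaxed along by overriding what VirtualGL reports (the value and application name here are purely illustrative):

```shell
# VGL_XVENDOR overrides the X server vendor string that VirtualGL
# reports to the application; "myapp" is a placeholder.
VGL_XVENDOR="The X.Org Foundation" vglrun myapp
```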

See the "Advanced Configuration" section in the user manual for a proper explanation of all supported environment variables, and the "Application Recipes" section for info on some specific applications that are known to require tweaking to work well with VirtualGL.

See Also