VirtualGL 2.0 User’s Guide
Intended audience: System Administrators, Graphics Programmers, Researchers, and others with knowledge of the Linux or Solaris operating systems, OpenGL and GLX, and the X Window System.
This document and all associated illustrations are licensed under the Creative Commons Attribution 2.5 License. Any works which contain material derived from this document must cite The VirtualGL Project as the source of the material and list the current URL for the VirtualGL web-site.
This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit (http://www.openssl.org/.) Further information is contained in LICENSE-OpenSSL.txt, which can be found in the same directory as this documentation.
VirtualGL is licensed under the wxWindows Library License, v3, a derivative of the LGPL.
VirtualGL is an open source package which provides hardware-accelerated 3D rendering capabilities to thin clients. Normally, when you run a Unix or Linux OpenGL application inside a thin client environment (such as VNC, remote X11, NX, etc.), the 3D application either does not work at all, is forced to use a slow software 3D renderer, or (worse) is forced to send every 3D command and piece of 3D data over the network to be rendered on the client machine. With VirtualGL, the OpenGL commands from the application are redirected onto a dedicated server’s 3D accelerator hardware. The resulting rendered images are then read back from the 3D hardware and composited into the appropriate window on the user’s desktop. This produces a completely seamless shared 3D environment that performs fast enough to take the place of a dedicated 3D workstation.
VirtualGL has two basic modes of operation:

Direct Mode: Whenever the 3D application signals the end of a frame (for instance, by calling glXSwapBuffers()), VirtualGL reads back the rendered 3D images from the server’s framebuffer, compresses them using a high-speed image codec, and sends the compressed images on a separate socket to the client. A separate VirtualGL Client application runs on the client machine; it decompresses the image stream from the server and composites it into the appropriate X window. Direct Mode is the fastest solution for running VirtualGL on a local area network, and it provides the same usability as running the application locally. Direct Mode is typically used to run data-intensive OpenGL applications in a “cold room” and remotely interact with these applications from a laptop or a slim PC located elsewhere in the same building/campus. Such big data applications often exceed the capabilities of a single PC (particularly a 32-bit PC), and the data sizes are large enough that transmitting the data across even the fastest of local area networks is impractical.

Raw Mode: VirtualGL bypasses its own image compressor and instead draws the rendered 3D images, uncompressed, into an X proxy (such as VNC), which then takes care of compressing and delivering the images to the client(s). This mode is described further in the section on Raw Mode below.
Linux:

| | Server (32-bit) | Server (64-bit) | Client |
|---|---|---|---|
| Recommended CPU | Pentium 4, 1.7 GHz or faster (or equivalent) | Pentium 4/Xeon with EM64T, or AMD Opteron or Athlon64, 1.8 GHz or faster | Pentium III or Pentium 4, 1.0 GHz or faster (or equivalent) |
| Graphics | Decent 3D graphics accelerator | Decent 3D graphics accelerator | Graphics card with decent 2D performance |
| Recommended O/S | | | |
| Other Software | X server configured for true color (24/32-bit) | X server configured for true color (24/32-bit) | X server configured for true color (24/32-bit) |
VirtualGL should build and run on Itanium Linux, but it has not been thoroughly tested. Contact us if you encounter any difficulties.
Solaris/Sparc:

| | Server (32/64-bit) | Client |
|---|---|---|
| Recommended CPU | UltraSPARC III 900 MHz or faster | UltraSPARC III 900 MHz or faster |
| Graphics | Decent 3D graphics accelerator with Sun OpenGL | Graphics card with decent 2D performance |
| O/S | Solaris 8 or higher | Solaris 8 or higher |
| Patches | | |
| Other Software | | |
Solaris/x86:

| | Server (32-bit) | Server (64-bit) | Client |
|---|---|---|---|
| Recommended CPU | Pentium 4, 1.7 GHz or faster (or equivalent) | Pentium 4/Xeon with EM64T, or AMD Opteron or Athlon64, 1.8 GHz or faster | Pentium III or Pentium 4, 1.0 GHz or faster (or equivalent) |
| Graphics | nVidia 3D graphics accelerator | nVidia 3D graphics accelerator | Graphics card with decent 2D performance |
| O/S | Solaris 10 or higher | Solaris 10 or higher | Solaris 10 or higher |
| Other Software | | | |
Solaris 10/x86 comes with mediaLib pre-installed, but it is strongly recommended that you upgrade this version of mediaLib to at least 2.4. This will greatly increase the performance of Solaris/x86 VirtualGL clients as well as the performance of 32-bit apps on Solaris/x86 VirtualGL servers.
Windows:

| | Client |
|---|---|
| Recommended CPU | Pentium III or Pentium 4, 1.0 GHz or faster (or equivalent) |
| Graphics | Graphics card with decent 2D performance |
| O/S | Windows 2000 or later |
| Other Software | |
Installing the VirtualGL package is necessary on any Linux machine that will act as a VirtualGL server or as a VirtualGL Direct Mode client. It is not necessary to install VirtualGL on the client machine if Raw Mode is to be used.
.tgz packages are provided for users of non-RPM platforms. You can use alien to convert these into .deb packages if you prefer.
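For example (a hypothetical conversion; this assumes that the alien utility is installed and that it accepts Slackware-style .tgz packages as input):

alien --to-deb VirtualGL*.tgz

On RPM-based systems, install the packages as root: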
rpm -Uvh turbojpeg*.rpm
rpm -Uvh VirtualGL*.rpm
If you are using an RPM-based distribution of Linux but there isn’t a pre-built VirtualGL RPM that matches your distribution, then you can build your own RPM using the VirtualGL Source RPM (SRPM.)
rpm -i VirtualGL*.src.rpm
cd /usr/src/redhat/SPECS
rpmbuild -ba virtualgl.spec

On SuSE, cd to /usr/src/packages/SPECS instead. Some versions of SuSE symlink this to /usr/src/redhat/SPECS.

The resulting binary RPMs will be placed in /usr/src/redhat/RPMS/{your_cpu_architecture} (or /usr/src/packages/RPMS/{your_cpu_architecture}), and you can install them using the instructions from the previous section.
If you are using a non-RPM based distribution of Linux, then log in as root, download the VirtualGL source tarball from the files area of http://sourceforge.net/projects/virtualgl, uncompress it, cd vgl, and type make install. Refer to BUILDING.txt in the source directory for further details.
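A minimal sketch of that procedure (the tarball name below is a placeholder; substitute the actual file you downloaded, and run the last step as root):

tar xzf VirtualGL-{version}.tar.gz
cd vgl
make install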
If installing VirtualGL on a server which is running version 1.0-71xx or earlier of the NVidia accelerated GLX drivers, follow the instructions in /usr/share/doc/NVIDIA_GLX-1.0/README regarding setting the appropriate permissions for /dev/nvidia*. This is not necessary with more recent versions of the driver. Run cat /proc/driver/nvidia/version to determine which version of the NVidia driver is installed on your system.
VirtualGL requires access to the server’s 3D graphics card so that it can create off-screen pixel buffers (Pbuffers) and redirect the 3D rendering from applications into these Pbuffers. Unfortunately, accessing a 3D graphics card on Linux currently requires going through an X server. So the only way to share the server’s 3D graphics resources among multiple users is to grant those users display access to the X server that is running on the shared 3D graphics card.
It is important to understand the security risks associated with this. Once X display access is granted to a user, there is nothing preventing that user from logging keystrokes or reading back images from the X display. Using xauth, one can obtain “untrusted” X authentication keys which prevent such exploits, but unfortunately, those untrusted keys also disallow access to the 3D hardware. So it is necessary to grant full trusted X access to any users that will need to run VirtualGL. Unless you fully trust the users to whom you are granting this access, you should avoid logging in locally to the server’s X display as root unless absolutely necessary. Even then, it’s probably a good idea to make sure that there are no suspicious-looking processes running on the system prior to logging in.

This section will explain how to configure a VirtualGL server such that select users can run VirtualGL, even if the server is sitting at the login prompt. The basic idea is to call a script (vglgenkey) from the display manager’s startup script. vglgenkey invokes xauth to generate an authorization key for the server’s X display, and it stores this key under /etc/opt/VirtualGL. The VirtualGL launcher script (vglrun) then attempts to read this key and merge it into the user’s .Xauthority file, thus granting the user access to the server’s X display. Therefore, you can control who has access to the server’s X display simply by controlling who has read access to the /etc/opt/VirtualGL directory.

If you prefer, you can also grant access to every authenticated user on the server by replacing the references to vglgenkey below with xhost +localhost.
- As root, run init 3 to shut down the display manager.
- Create a group called vglusers and add any users that need to run VirtualGL to this group.
- Create a directory called /etc/opt/VirtualGL and make it readable by the vglusers group, e.g.:

mkdir -p /etc/opt/VirtualGL
chgrp vglusers /etc/opt/VirtualGL
chmod 750 /etc/opt/VirtualGL

- Edit /etc/inittab and change the default runlevel from id:3:initdefault: to id:5:initdefault:
- Add a call to vglgenkey at the top of the display manager’s startup script. The location of this script varies depending on the particular Linux distribution and display manager being used. The following table lists some common locations for this file:
| | xdm or kdm | gdm (default display manager on most Linux systems) |
|---|---|---|
| RedHat 7/8/9, Enterprise Linux 2.1/3 | /etc/X11/xdm/Xsetup_0 (replace “0” with the display number of the X server you are configuring) | /etc/X11/gdm/Init/Default (usually this is just symlinked to /etc/X11/xdm/Xsetup_0) |
| Enterprise Linux 4, Fedora Core 1/2/3 | /etc/X11/xdm/Xsetup_0 (replace “0” with the display number of the X server you are configuring) | /etc/X11/gdm/Init/:0 (usually this is just symlinked to /etc/X11/xdm/Xsetup_0) |
| SuSE/United Linux | /etc/X11/xdm/Xsetup | /etc/opt/gnome/gdm/Init/Default |
- If you are using gdm, edit the gdm.conf file and add the following line under the [security] section (or change it if it already exists):

DisallowTCP=false

See the table below for the location of gdm.conf on various systems.
- Optionally, as an added security measure, disable the XTEST extension by adding an argument of -tst on the command line used to launch the X server. The location of this command line varies depending on the particular Linux distribution and display manager being used. The following table lists some common locations:

| | xdm | gdm (default on most Linux systems) | kdm |
|---|---|---|---|
| RedHat (or equivalent) | /etc/X11/xdm/Xservers | /etc/X11/gdm/gdm.conf | /etc/X11/xdm/Xservers |
| SuSE/United Linux | /etc/X11/xdm/Xservers | /etc/opt/gnome/gdm/gdm.conf | /etc/opt/kde3/share/config/kdm/Xservers |
For xdm-style or kdm-style configuration files (Xservers), add -tst to the line corresponding to the display number you are configuring, e.g.:

:0 local /usr/X11R6/bin/X :0 vt07 -tst

For gdm-style configuration files, add -tst to all lines that appear to be X server command lines, e.g.:

StandardXServer=/usr/X11R6/bin/X -tst

[server-Standard]
command=/usr/X11R6/bin/X -tst -audit 0

[server-Terminal]
command=/usr/X11R6/bin/X -tst -audit 0 -terminate

[server-Chooser]
command=/usr/X11R6/bin/X -tst -audit 0
- Restart the display manager by running init 5 as root.
- To verify the configuration, run the following commands as a user who is a member of the vglusers group:

xauth merge /etc/opt/VirtualGL/vgl_xauth_key
xdpyinfo -display :0

In particular, make sure that XTEST doesn’t show up in the list of extensions if you disabled it above. If xdpyinfo fails to run, then the permissions on Display :0 are probably still too restrictive, meaning that one of the changes above didn’t take for some reason.
The VirtualGL Direct Mode client for Linux can be configured to start automatically whenever an X Windows session starts. To do this, run

vglclient_config -install

as root. Depending on your system configuration, this script will either tweak the /etc/X11/xinit/xinitrc file or create a link in /etc/X11/xinit/xinitrc.d so that the VirtualGL client will be automatically started whenever any X Windows session is started. Running vglclient_config -install also adds a line to /etc/X11/gdm/PostSession/Default (or the equivalent for your system) to terminate the client whenever you log out of the X Windows session. This is known to work on RedHat- and SuSE-compatible platforms that use the Gnome Display Manager (gdm.) It probably won’t work on other distributions and display managers.

To remove the auto-start feature, run

vglclient_config -remove

as root.

If vglclient_config doesn’t work on your system, then you can edit the appropriate X11 session files so that /usr/bin/vglclient_daemon start runs whenever an X session starts and /usr/bin/vglclient_daemon stop runs whenever the session terminates. vglclient_daemon will only start vglclient if it is not already running, so starting the client in this manner guarantees that there will never be more than one copy of it running on the system. vglclient_daemon should work on any Linux platform that conforms to the Linux Standard Base (LSB.)
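As a rough sketch (the exact session files vary by distribution, and the script name used here is hypothetical), one might create a script such as /etc/X11/xinit/xinitrc.d/vglclient.sh containing:

#!/bin/sh
# Start the VirtualGL client (if it is not already running) when an X session starts
/usr/bin/vglclient_daemon start

and add a corresponding /usr/bin/vglclient_daemon stop line to the display manager's log-out script (e.g. /etc/X11/gdm/PostSession/Default.)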
If additional X displays are started by the same user (:1, :2, etc.), this will not cause additional VirtualGL client instances to start. Only one VirtualGL client instance is needed to talk to all active displays. However, it is important to note that all active displays on the client machine need to be running under the same user privileges as the VirtualGL client, or they need to grant permissions to localhost (xhost +localhost) so that the VirtualGL client can access them.

If you wish to change the default port that the client listens on, you will need to edit /usr/bin/vglclient_daemon and pass the appropriate argument (-port <port number> or -sslport <port number>) on the vglclient command line located in that file. By default, the client will listen on port 4242 for unencrypted connections and port 4243 for SSL connections.
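For instance (a sketch; the exact command line inside vglclient_daemon will differ), the vglclient invocation in that file might be changed to

vglclient -port 4244 -sslport 4245

to make the client listen on ports 4244 and 4245 instead of the defaults.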
As root, issue the following command:
rpm -e VirtualGL
Installing the VirtualGL package is necessary on any Solaris machine that will act as a VirtualGL server or as a VirtualGL Direct Mode client. It is not necessary to install VirtualGL on the client machine if Raw Mode is to be used.
bzip2 -d SUNWvgl-{version}.pkg.bz2
pkgadd -d SUNWvgl-{version}.pkg

Select the SUNWvgl package (usually option 1) from the menu. VirtualGL for Solaris installs into /opt/SUNWvgl.

- Edit /etc/logindevperm and comment out the “frame buffers” line, e.g.:

# /dev/console 0600 /dev/fbs/* # frame buffers

- Change the permissions on /dev/fbs/* to allow write access to anyone who will need to use VirtualGL, e.g.:

chmod 660 /dev/fbs/*
chown root /dev/fbs/*
chgrp vglusers /dev/fbs/*
Explanation: Normally, when someone logs into a Solaris machine, the system will automatically assign ownership of the framebuffer devices to that user and set the permissions for the framebuffer devices to those specified in /etc/logindevperm. The default setting in /etc/logindevperm disallows anyone from using the framebuffer devices except the user that is logged in. But in order to run VirtualGL, a user needs write access to the framebuffer devices. So in order to make the framebuffer a shared resource, it is necessary to disable the login device permissions mechanism for the framebuffer devices and manually set the owner and group for these devices such that any VirtualGL users can write to them.

The server’s SSh daemon should have the X11Forwarding option enabled. This is configured in sshd_config, the location of which varies depending on your distribution of SSh. Solaris 10 generally keeps this in /etc/ssh, whereas Blastwave keeps it in /opt/csw/etc and SunFreeware keeps it in /usr/local/etc.
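For example (assuming the stock Solaris 10 location of /etc/ssh/sshd_config), verify that the file contains the line

X11Forwarding yes

and then restart the SSh daemon so that the change takes effect (on Solaris 10, this can typically be done with svcadm restart ssh.)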
If you plan to use VirtualGL only in GLP mode, then you can skip this section.
VirtualGL requires access to the server’s 3D graphics card so that it can create off-screen pixel buffers (Pbuffers) and redirect the 3D rendering from applications into these Pbuffers. Unfortunately, accessing a 3D graphics card on Solaris/x86 systems or on Solaris/Sparc systems without GLP requires going through an X server. On such systems, the only way to share the server’s 3D graphics resources among multiple users is to grant those users display access to the X server that is running on the shared 3D graphics card.
It is important to understand the security risks associated with this. Once X display access is granted to a user, there is nothing preventing that user from logging keystrokes or reading back images from the X display. Using xauth, one can obtain “untrusted” X authentication keys which prevent such exploits, but unfortunately, those untrusted keys also disallow access to the 3D hardware. So it is necessary to grant full trusted X access to any users that will need to run VirtualGL. Unless you fully trust the users to whom you are granting this access, you should avoid logging in locally to the server’s X display as root unless absolutely necessary. Even then, it’s probably a good idea to make sure that there are no suspicious-looking processes running on the system prior to logging in.

This section will explain how to configure a VirtualGL server such that select users can run VirtualGL, even if the server is sitting at the login prompt. The basic idea is to call a script (vglgenkey) from the display manager’s startup script. vglgenkey invokes xauth to generate an authorization key for the server’s X display, and it stores this key under /etc/opt/VirtualGL. The VirtualGL launcher script (vglrun) then attempts to read this key and merge it into the user’s .Xauthority file, thus granting the user access to the server’s X display. Therefore, you can control who has access to the server’s X display simply by controlling who has read access to the /etc/opt/VirtualGL directory.

If you prefer, you can also grant access to every authenticated user on the server by replacing the references to vglgenkey below with xhost +localhost.
- Create a group called vglusers and add any users that need to run VirtualGL to this group.
- Create a directory called /etc/opt/VirtualGL and make it readable by the vglusers group, e.g.:

mkdir -p /etc/opt/VirtualGL
chgrp vglusers /etc/opt/VirtualGL
chmod 750 /etc/opt/VirtualGL
- As root, stop dtlogin:

/etc/init.d/dtlogin stop

- If the /etc/dt/config directory does not exist, create it:

mkdir -p /etc/dt/config

- If /etc/dt/config/Xsetup does not exist, then copy the default Xsetup file from /usr/dt/config to that location:

cp /usr/dt/config/Xsetup /etc/dt/config/Xsetup

- Edit /etc/dt/config/Xsetup and add the following line to the bottom of the file:

/opt/SUNWvgl/bin/vglgenkey
- If /etc/dt/config/Xconfig does not exist, then copy the default Xconfig file from /usr/dt/config to that location:

cp /usr/dt/config/Xconfig /etc/dt/config/Xconfig

- Edit /etc/dt/config/Xconfig and add (or uncomment) the following line:

Dtlogin*grabServer: False
The Dtlogin*grabServer option restricts X display access to only the dtlogin process. This is an added security measure, since it prevents a user from attaching any kind of sniffer program to the X display even if they have display access. But Dtlogin*grabServer also prevents VirtualGL from using the X display to access the 3D graphics hardware, so this option must be disabled for VirtualGL to work properly.

If the system you are configuring as a VirtualGL server is also being used as a Sun Ray server, then make these same modifications to /etc/dt/config/Xconfig.SUNWut.prototype. Otherwise, the modifications you just made to /etc/dt/config/Xconfig will be overwritten the next time the system is restarted.
- If /etc/dt/config/Xservers does not exist, then copy the default Xservers file from /usr/dt/config to that location:

cp /usr/dt/config/Xservers /etc/dt/config/Xservers

- Edit /etc/dt/config/Xservers and add an argument of -tst to the line corresponding to the display number you are configuring, e.g.:

:0 Local local_uid@console root /usr/openwin/bin/Xsun :0 -nobanner -tst

If the system you are configuring as a VirtualGL server is also being used as a Sun Ray server, then make these same modifications to /etc/dt/config/Xservers.SUNWut.prototype. Otherwise, the modifications you just made to /etc/dt/config/Xservers will be overwritten the next time the system is restarted.
- Verify that /etc/dt/config and /etc/dt/config/Xsetup can be executed by all users, and verify that /etc/dt/config/Xconfig and /etc/dt/config/Xservers can be read by all users.
- Restart dtlogin:

/etc/init.d/dtlogin start

- To verify the configuration, run the following commands as a user who is a member of the vglusers group:

/usr/openwin/bin/xauth merge /etc/opt/VirtualGL/vgl_xauth_key
/usr/openwin/bin/xdpyinfo -display :0

In particular, make sure that XTEST doesn’t show up in the list of extensions if you disabled it above. If xdpyinfo fails to run, then the permissions on Display :0 are probably still too restrictive, meaning that one of the changes above didn’t take for some reason.
- Create a group called vglusers and add any users that need to run VirtualGL to this group.
- Create a directory called /etc/opt/VirtualGL and make it readable by the vglusers group, e.g.:

mkdir -p /etc/opt/VirtualGL
chgrp vglusers /etc/opt/VirtualGL
chmod 750 /etc/opt/VirtualGL

- As root, stop the GNOME Display Manager:

svcadm disable gdm2-login

- Add

/opt/SUNWvgl/bin/vglgenkey

to the top of the /etc/X11/gdm/Init/Default file.
- Edit /etc/X11/gdm/gdm.conf and add the following line under the [security] section (or change it if it already exists):

DisallowTCP=false

- Optionally, as an added security measure, disable the XTEST extension: edit /etc/X11/gdm/gdm.conf and add -tst to all lines that appear to be X server command lines, e.g.:

StandardXServer=/usr/X11R6/bin/Xorg -tst

[server-Standard]
command=/usr/X11R6/bin/Xorg -tst -audit 0

[server-Terminal]
command=/usr/X11R6/bin/Xorg -tst -audit 0 -terminate

[server-Chooser]
command=/usr/X11R6/bin/Xorg -tst -audit 0

- Re-enable the GNOME Display Manager:

svcadm enable gdm2-login
- To verify the configuration, run the following commands as a user who is a member of the vglusers group:

/usr/openwin/bin/xauth merge /etc/opt/VirtualGL/vgl_xauth_key
/usr/openwin/bin/xdpyinfo -display :0

In particular, make sure that XTEST doesn’t show up in the list of extensions if you disabled it above. If xdpyinfo fails to run, then the permissions on Display :0 are probably still too restrictive, meaning that one of the changes above didn’t take for some reason.
As root, issue the following command:
pkgrm SUNWvgl
Answer “yes” when prompted.
Installing the VirtualGL package is necessary on any Windows machine that will act as a VirtualGL Direct Mode client. It is not necessary to install VirtualGL on the client machine if Raw Mode is to be used.
- Run the VirtualGL installer (VirtualGL-{version}.exe.)
- If you plan to use Hummingbird Exceed, add the Exceed installation directory (e.g. C:\Program Files\Hummingbird\Connectivity\9.00\Exceed) to your system PATH environment variable if it isn’t already there.
If you are using the “Classic View” mode of XConfig, open the “Protocol” applet instead.
If you are using the “Classic View” mode of XConfig, open the “Performance” applet instead.
VirtualGL has the ability to take advantage of the MIT-SHM extension in Hummingbird Exceed to accelerate image drawing on Windows. This can improve the overall performance of the VirtualGL pipeline by as much as 20% in some cases.
The bad news is that this extension has some issues in earlier versions of Exceed. If you are using Exceed 8 or 9, you will need to obtain the following patches from the Hummingbird support site:
| Product | Patches Required | How to Obtain |
|---|---|---|
| Hummingbird Exceed 8.0 | hclshm.dll v9.0.0.1 (or higher), xlib.dll v9.0.0.3 (or higher), exceed.exe v8.0.0.28 (or higher) | Download all patches from the Hummingbird support site. (Hummingbird WebSupport account required) |
| Hummingbird Exceed 9.0 | hclshm.dll v9.0.0.1 (or higher), xlib.dll v9.0.0.3 (or higher), exceed.exe v9.0.0.9 (or higher) | exceed.exe can be patched by running Hummingbird Update. All other patches must be downloaded from the Hummingbird support site. (Hummingbird WebSupport account required) |
No patches should be necessary for Exceed 10 and above.
Next, you need to enable the MIT-SHM extension in Exceed:
If you are using the “Classic View” mode of XConfig, open the “Protocol” applet instead.
It is recommended that you use SSh (Secure Shell) to log in to the application server and launch applications. Some servers are configured to allow telnet and RSh access, but telnet and RSh both send passwords unencrypted over the network and are thus being phased out in favor of SSh. If Cygwin is already installed on your Windows VirtualGL client machine, then you can use the SSh client included in Cygwin. Otherwise, download and install PuTTY.
The VirtualGL Windows Client can be installed as a Windows service (and subsequently removed) using the links provided in the “VirtualGL Client” start menu group. Once installed, the service can be started from the Services applet in the Control Panel (located under “Administrative Tools”) or by invoking

net start vglclient

from a command prompt. The service can be subsequently stopped by invoking

net stop vglclient

If you wish to install the client as a service and have it listen on a port other than the default (4242 for unencrypted connections or 4243 for SSL connections), then you will need to install the service manually from the command line. vglclient -? gives a list of the relevant command-line options.
- Start the VirtualGL client, if it isn’t already running:

vglclient

on Linux, or

/opt/SUNWvgl/bin/vglclient

on Solaris.

- Log into the application server using SSh with X11 forwarding enabled:

ssh -X -l {your_user_name} {server_machine_name_or_IP}

- In the SSh session, set the VGL_CLIENT environment variable on the server to point back to the client’s X display, e.g.:

export VGL_CLIENT={client_machine_name_or_IP}:0.0

or

setenv VGL_CLIENT {client_machine_name_or_IP}:0.0

- Launch the application through vglrun:

vglrun [vglrun options] {application_executable_or_script} {arguments}

if the application server is running Linux, or

/opt/SUNWvgl/bin/vglrun [vglrun options] {application_executable_or_script} {arguments}

if the application server is running Solaris. Refer to Section 18 for more information about vglrun’s command line options.
You may have noticed that the procedure above enables forwarding of the X11 traffic over the SSh connection. You can also use VirtualGL with a direct X11 connection, if you prefer, and grant the application server access to the client machine’s X server using xhost or xauth. We have never observed any performance benefit or other benefit to using a direct X11 connection, however. Additionally, some newer Linux distributions ship with X11 TCP connections disabled, and thus using direct X11 connections is not possible with such systems without reconfiguring them. If you do choose to use a direct X11 connection, then set the DISPLAY environment variable (in the step above where VGL_CLIENT is set) rather than VGL_CLIENT.
- Open a command prompt and set the DISPLAY environment variable to whichever display Exceed is occupying, e.g.:

set DISPLAY=localhost:0.0

If you don’t anticipate the need to launch multiple Exceed sessions, then you can set this environment variable globally (Control Panel–>System–>Advanced.)
- Log into the application server using SSh with X11 forwarding enabled, using either the Cygwin SSh client or PuTTY:

ssh -X -l {your_user_name} {server_machine_name_or_IP}

or

putty -X -l {your_user_name} {server_machine_name_or_IP}
- If you are configuring the connection through the PuTTY GUI instead, enable X11 forwarding and set the X display location to localhost:0.0 or to whichever display number that Exceed is occupying.
- Set the VGL_CLIENT environment variable on the server to point back to the client machine:

export VGL_CLIENT={client_machine_name_or_IP}:0.0

or

setenv VGL_CLIENT {client_machine_name_or_IP}:0.0
- Launch the application through vglrun:

vglrun [vglrun options] {application_executable_or_script} {arguments}

if the application server is running Linux, or

/opt/SUNWvgl/bin/vglrun [vglrun options] {application_executable_or_script} {arguments}

if the application server is running Solaris. Refer to Section 18 for more information about vglrun’s command line options.
You may have noticed that the procedure above enables forwarding of the X11 traffic over the SSh connection. You can also use VirtualGL with a direct X11 connection, if you prefer, and grant the application server access to the X display using the “Security” applet in Exceed XConfig. We have never observed any performance benefit or other benefit to using a direct X11 connection, however. If you do choose to use a direct X11 connection, then set the DISPLAY environment variable (in the step above where VGL_CLIENT is set) rather than VGL_CLIENT.
It is generally a good idea to make sure that a regular X application (such as xterm) can be remotely displayed from the application server to your client workstation prior to attempting to run VirtualGL.
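For example (a quick sanity check, assuming xterm is installed on the application server):

ssh -X -l {your_user_name} {server_machine_name_or_IP}
xterm

If an xterm window appears on your client’s desktop, then remote X11 display is working.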
VirtualGL has built-in support for encrypting its compressed image stream inside a Secure Socket Layer (SSL.) For performance reasons, this feature is not enabled by default, but it can easily be enabled. On the server, set the environment variable VGL_SSL to 1 prior to launching the application, or pass an argument of +s to vglrun. No action is required on the client. The client will automatically accept SSL or unencrypted connections, unless you have configured it otherwise (see Section 18 for more details.)
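For example, either of the following command sequences (run on the application server) enables SSL encryption for a given application instance:

export VGL_SSL=1
vglrun {application_executable_or_script} {arguments}

or

vglrun +s {application_executable_or_script} {arguments}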
So what if your only path into the network is through SSh, perhaps through a single “gateway” machine? No problem, because SSh allows you to tunnel both incoming and outgoing TCP/IP connections on any port from one machine to another. Tunneling VirtualGL’s compressed image stream through SSh will not be as fast as using the built-in SSL encryption capabilities of VirtualGL, but sometimes it’s the only option available.
Let’s assume the following configuration:
What we want to do is tunnel both the X11 protocol stream and VirtualGL’s compressed image stream through SSh. Here’s one way to do it:
- On the client machine, set the DISPLAY environment variable to point to Exceed’s display, e.g.:

set DISPLAY=localhost:0.0

(replace “:0.0” with whatever display number Exceed is parking on.)

- Open an SSh session into the gateway machine:

ssh -X -R 4242:localhost:4242 username@ssh_gateway_machine

This tells SSh to tunnel all X11 traffic from your session on ssh_gateway_machine to your client’s display, and additionally it will tunnel all outbound traffic to port 4242 on ssh_gateway_machine to inbound port 4242 on your client machine.

This command line also works with PuTTY. Just replace “ssh” with the path to the PuTTY executable. You can also configure the same thing through the PuTTY GUI.
- Once you are logged into ssh_gateway_machine, issue the following command inside that SSh session:

ssh -X -R 4242:localhost:4242 username@app_server_machine

This tells SSh to tunnel all X11 traffic from your session on app_server_machine to your session on ssh_gateway_machine, where it will be re-tunneled to the client display. Additionally, all outbound traffic to port 4242 on app_server_machine will be tunneled to port 4242 on ssh_gateway_machine, which will then re-tunnel the traffic to inbound port 4242 on your client machine.

- You should now be logged into app_server_machine. Inside that session, set the environment variable VGL_CLIENT to localhost:n.0, where n is the display number of the X server running on the client machine.

- vglrun your application.
You can of course replace port 4242 in all of the steps above with whatever port you choose, but make sure that if you change the port, you configure both the client and server to talk on the port you choose (using the -port argument to vglclient as well as the VGL_PORT environment variable on the server.)
This same procedure would also work if you needed to connect directly to app_server_machine and tunnel everything over SSh. In that case, simply leave out the second SSh hop (the command issued on ssh_gateway_machine.)
Referring to Section 2, Raw Mode is a mode in which VirtualGL bypasses its own image compressor and instead draws the rendered 3D images as uncompressed bitmaps into an X proxy. In this mode, VirtualGL relies on the X proxy to do the job of compressing and delivering images to the client(s).
When an X proxy session is started on the server, it generally chooses a unique display number (such as :1, :2, etc.) and starts a customized X server on that display number. This customized X server renders all graphical output from the application into a bitmap in memory, which it then compresses and sends to the client(s). The proxy may or may not even implement the GLX extension (VNC does not), and thus it might not have any native ability to run OpenGL applications. But since VirtualGL is designed to intercept and hand off all GLX commands to the hardware-accelerated root display (usually display :0), VirtualGL can be used as a “3D to 2D converter” to allow 3D apps to run within VNC or another X11 proxy that doesn’t natively support GLX.
TurboVNC is essentially just a version of TightVNC with optimizations to make it perform at peak efficiency with full-screen video workloads (which is, in a nutshell, what VirtualGL produces.) These optimizations include:
Other notable differences between TurboVNC and TightVNC:
- The viewer’s buffering behavior can be changed by passing -singlebuffer to vncviewer (or selecting the corresponding option in the configuration GUI.)
- Wide-area network tuning can be enabled by passing -wan to vncviewer (or selecting the corresponding option in the configuration GUI.)
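For example (a hypothetical invocation; the host name and display number are placeholders):

vncviewer -singlebuffer -wan turbovnc_server:1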
On a local area network, TurboVNC + VirtualGL in Raw Mode can generally produce levels of performance within 80-90% of VirtualGL in Direct Mode. On a wide-area network, TurboVNC wins hands down. Direct Mode is still preferable if a seamless user experience is a requirement and if performance is critical. But if a bit of performance can be sacrificed and if collaboration and a stateless client are more important features than seamless windows, then VirtualGL+TurboVNC would be the appropriate solution. In the long term, we are looking for a way to combine the best of both solutions into one. But such is not an easy problem to solve …
TurboVNC allows VirtualGL to be used with respectable performance over low-bandwidth/high-latency networks, such as broadband or satellite. As with VirtualGL’s direct mode, the quality and subsampling of TurboVNC’s JPEG image stream can be adjusted to reduce the size of the image stream without reducing the number of image colors. TurboVNC provides a preset mode for broadband connections, which sets the quality to a low level that is noticeably lossy but still quite usable. It should be possible to redraw a 1280x1024 window at greater than 10 frames/second on a standard cable modem connection using this preset mode.
For instructions on the usage of TurboVNC, please refer to the TurboVNC man pages:
On Linux:
man -M /opt/TurboVNC/man {vncserver | Xvnc | vncviewer | vncconnect | vncpasswd}
On Solaris:
man -M /opt/SUNWtvnc/man {vncserver | Xvnc | vncviewer | vncconnect | vncpasswd}
On Windows, use the embedded help feature (the question mark button in the upper right of the window.)
The TightVNC documentation:
http://www.tightvnc.com/docs.html
might also be helpful, since TurboVNC is based on TightVNC and shares many of its features.
Raw Mode is automatically enabled if VirtualGL detects that it is running on the same machine as the X server, which it assumes to be the case if the X display name begins with a colon (“:”) or with “unix:”. In most cases, this will cause VirtualGL to automatically use Raw Mode when it is launched in an X proxy environment such as VNC or NX. But you can manually enable Raw Mode by setting the VGL_COMPRESS environment variable to 0 on the server or passing an argument of -c 0 to vglrun (see Section 18 for more details.) Make sure that the DISPLAY variable points to whatever display number that VNC (or your X proxy of choice) is occupying (e.g. :1, :2, etc.)
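For example, inside a VNC session that is running on display :1 (the display number and application name are placeholders):

export DISPLAY=:1
vglrun -c 0 {application_executable_or_script} {arguments}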
vglrun and Solaris Shell Scripts

vglrun can be used to launch either binary executables or shell scripts, but there are a few things to keep in mind when using vglrun to launch a shell script on Solaris. When you vglrun a shell script, the VirtualGL faker library will be preloaded into every executable that the script launches. Normally this is innocuous, but if the script calls any executables that are setuid root, then Solaris will refuse to load those executables, because you are attempting to preload a library (VirtualGL) that is not in a “secure path.” Solaris keeps a tight lid on what goes into /usr/lib and /lib, and by default, it will only allow libraries in those paths to be preloaded into an executable that is setuid root. Generally, 3rd party packages are forbidden from installing anything into /usr/lib or /lib. But you can use the crle utility to add other directories to the operating system’s list of secure paths. In the case of VirtualGL, you would issue (as root):

crle -u -s /opt/SUNWvgl/lib
crle -64 -u -s /opt/SUNWvgl/lib/64
But please be aware of the security ramifications of this before you do it. You are essentially telling Solaris that you trust the security and stability of the VirtualGL code as much as you trust the security and stability of the operating system. And while we’re flattered, we’re not sure that we’re necessarily deserving of that accolade, so if you are in a security critical environment, apply the appropriate level of paranoia here.
An easier and perhaps more secure approach is to simply edit the application script and have it issue vglrun only for the executables that you wish to run in the VirtualGL environment. But sometimes that is not an option.

vglrun on Solaris has two options that are relevant to launching scripts:

vglrun -32 {script}

will preload VirtualGL only into 32-bit executables called by the script, whereas

vglrun -64 {script}

will preload VirtualGL only into 64-bit executables.
Sun Microsystems has developed an extension to OpenGL called GLP which allows an application to directly access the rendering capabilities of a 3D graphics card even if there is no X server running on the card. Apart from greatly simplifying the process of setting up VirtualGL on a server, GLP also greatly improves the overall security of VirtualGL servers, since it is no longer necessary to grant every user access to display :0. In addition, GLP makes it quite simple to assign VirtualGL jobs to any graphics pipe in a multi-pipe system.
Version 2.0 of VirtualGL for Sparc/Solaris can use GLP if it is available. Currently, GLP is available only in Sun OpenGL 1.5 for Sparc/Solaris.
See http://www.opengl.org/about/arb/meeting_notes/notes/glP_presentation.pdf for more details on GLP.
If GLP is supported on your application server, it can be enabled by passing an argument of -d glp to vglrun, e.g.:

/opt/SUNWvgl/bin/vglrun -d glp {application_executable_or_script} {arguments}

This will tell the VirtualGL faker to enable GLP mode and select the first available GLP device. You can also set the VGL_DISPLAY environment variable to glp to achieve the same effect:

export VGL_DISPLAY=glp
/opt/SUNWvgl/bin/vglrun {application_executable_or_script} {arguments}

Additionally, you can specify a specific GLP device to use for rendering:

export VGL_DISPLAY=/dev/fbs/jfb0
/opt/SUNWvgl/bin/vglrun {application_executable_or_script} {arguments}
The lion’s share of OpenGL applications are dynamically linked against libGL.so, and thus libGL.so is automatically loaded whenever the application loads. Whenever vglrun is used to launch such applications, VirtualGL is loaded ahead of libGL.so, meaning that OpenGL and GLX symbols are resolved from VirtualGL first and the “real” OpenGL library second.
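For example (a quick check; the application path is a placeholder), you can usually tell whether an executable is dynamically linked against libGL.so by running:

ldd {application_executable} | grep libGL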
However, some applications (particularly games) are not dynamically linked against libGL.so. These applications typically call dlopen() and dlsym() later on in the program’s execution to manually load OpenGL and GLX symbols from libGL.so. Such applications also generally provide a mechanism (usually either an environment variable or a command line argument) which allows the user to specify a library that can be loaded instead of libGL.so.
So let’s assume that you just downloaded the latest version of the Linux game Foo Wars from the Internet, and (for whatever reason) you want to run the game in a VNC session. The game provides a command line switch -g which can be used to specify an OpenGL library to load other than libGL.so. You would launch the game using a command line such as this:

vglrun foowars -g /usr/lib/librrfaker.so
You still need to use vglrun to launch the game, because VirtualGL must also intercept a handful of X11 calls. Using vglrun allows VGL to intercept these calls, whereas using the game’s built-in mechanism for loading a substitute OpenGL library allows VirtualGL to intercept the GLX and OpenGL calls.

In some cases, the application doesn’t provide an override mechanism such as the above. In these cases, you should pass an argument of -dl to vglrun when starting the application, e.g.:

vglrun -dl foowars

Passing -dl to vglrun forces another library to be loaded ahead of VirtualGL and libGL.so. This new library intercepts any calls to dlopen() and forces the application to open VirtualGL instead of libGL.so.
Section 14 contains specific recipes for getting a variety of games and other applications to work with VirtualGL.
Chromium is a powerful framework for performing various types of parallel OpenGL rendering. It is usually used on clusters of commodity Linux PCs to divide up the task of rendering scenes with large geometries or large pixel counts (such as when driving a display wall.) Chromium is most often used in one of three configurations:
Sort-First Rendering (Image-Space Decomposition) is used to overcome the fill-rate limitations of individual graphics cards. When configured to use sort-first rendering, Chromium divides up the scene based on which polygons will be visible in a particular section of the final image. It then instructs each node of the cluster to render only the polygons that are necessary to generate the image section (“tile”) for that node. This is primarily used to drive high-resolution displays that would be impractical to drive from a single graphics card due to limitations in the card’s framebuffer memory, processing power, or both. Configuration 1 could be used, for instance, to drive a CAVE, video wall, or even an extremely high-resolution monitor. In this configuration, each Chromium node generally uses all of its screen real estate to render a section of the multi-screen image.
VirtualGL is generally not very useful with Configuration 1. You could theoretically install a separate copy of VirtualGL on each display node and use it to redirect the output of each crserver instance to a multi-screen X server running elsewhere on the network. But there would be no way to synchronize the screens on the remote end. Chromium uses DMX to synchronize the screens in a multi-screen configuration, and VirtualGL would have to be made DMX-aware for it to perform the same job. Maybe at some point in the future … If you have a need for such a configuration, let us know.
Configuration 2 uses the same sort-first principle as Configuration 1, except that each tile is only a fraction of a single screen, and the tiles are recombined into a single window on Node 0. This configuration is perhaps the least often used of the three, but it is useful in cases where the scene contains a large amount of textures (such as in volume rendering) and thus rendering the whole scene on a single node would be prohibitively slow due to fill-rate limitations.
In this configuration, the application is allowed to choose a visual, create an X window, and manage the window as it would normally do. But all other OpenGL and GLX activity is intercepted by the Chromium App Faker (CrAppFaker) so that the rendering task can be split up among the rendering nodes. Once each node has rendered its section of the final image, the tiles get passed back to a Chromium Server (CrServer) process running on Node 0. This CrServer process attaches to the previously-created application window and draws the pixels into it using glDrawPixels().

The general strategy for making this work with VirtualGL is to first make it work without VirtualGL and then insert VirtualGL only into the processes that run on Node 0. VirtualGL must be inserted into the CrAppFaker process to prevent CrAppFaker from sending glXChooseVisual() calls to the X server (which would fail if the X server is a VNC server or otherwise does not provide GLX.) VirtualGL must be inserted into the CrServer process on Node 0 to prevent it from sending glDrawPixels() calls to the X server (which would effectively send uncompressed images over the network.) Instead, VirtualGL forces CrServer to draw into a Pbuffer, and VGL takes charge of transmitting those pixels to the destination X server in the most efficient way possible.
Since Chromium uses dlopen() to load the system’s OpenGL library, preloading VirtualGL into the CrAppFaker and CrServer processes using vglrun is not sufficient. Fortunately, Chromium provides an environment variable, CR_SYSTEM_GL_PATH, which allows one to specify an alternate path in which it will search for the system’s libGL.so. The VirtualGL packages for Linux and Solaris include a symbolic link named libGL.so which really points to the VirtualGL faker library (librrfaker.so) instead. This symbolic link is located in its own isolated directory, so that directory can be passed to Chromium in the CR_SYSTEM_GL_PATH environment variable, thus causing Chromium to load VirtualGL rather than the “real” OpenGL library. Refer to the following table:

| | 32-bit Applications | 64-bit Applications |
|---|---|---|
| Linux | /opt/VirtualGL/lib | /opt/VirtualGL/lib64 |
| Solaris | /opt/SUNWvgl/fakelib | /opt/SUNWvgl/fakelib/64 |

CR_SYSTEM_GL_PATH setting required to use VirtualGL with Chromium

Running CrServer in VirtualGL is simply a matter of setting this environment variable and then invoking crserver with vglrun, e.g.:

export CR_SYSTEM_GL_PATH=/opt/VirtualGL/lib
vglrun crserver
In the case of CrAppFaker, it is also necessary to set VGL_GLLIB to the location of the “real” OpenGL library, e.g. /usr/lib/libGL.so.1. CrAppFaker creates its own fake version of libGL.so which is really just a copy of Chromium’s libcrfaker.so. So VirtualGL, if left to its own devices, will unwittingly try to load libcrfaker.so instead of the “real” OpenGL library. Chromium’s libcrfaker.so will in turn try to load VirtualGL again, and an endless loop will occur. So what we want to do is something like this:

export CR_SYSTEM_GL_PATH=/opt/VirtualGL/lib
export VGL_GLLIB=/usr/lib/libGL.so.1
crappfaker

CrAppFaker will copy the application to a temp directory and then copy libcrfaker.so to that same directory, renaming it as libGL.so. So when the application is started, it loads libcrfaker.so instead of libGL.so. libcrfaker.so will then load VirtualGL instead of the “real” libGL, because we’ve overridden CR_SYSTEM_GL_PATH to make Chromium find VirtualGL’s fake libGL.so first. VirtualGL will then use the library specified in VGL_GLLIB to make any “real” OpenGL calls that it needs to make.

Note that crappfaker should not be invoked with vglrun.
So, putting this all together, here is an example of how you might start a sort-first rendering job using Chromium and VirtualGL:

- Start crserver on each of the rendering nodes.
- On Node 0, set CR_SYSTEM_GL_PATH to the appropriate value for the operating system and application type (see table above), then run:

vglrun crserver &

- Also on Node 0, set VGL_GLLIB to the location of the “real” libGL, e.g. /usr/lib/libGL.so.1 or /usr/lib64/libGL.so.1, then run:

crappfaker

(do not use vglrun here.)
Again, it’s always a good idea to make sure this works without VirtualGL before adding VirtualGL into the mix.
When using VirtualGL with this mode, resizing the application window may not work properly. This is because the resize event is sent to the application process, and therefore the CrServer process that’s actually drawing the pixels has no way of knowing that a window resize has occurred. A possible fix is to modify Chromium such that it propagates the resize event down the render chain so that all of the CrServer processes are aware that a resize event occurred.
Sort-Last Rendering is used when the scene contains a huge number of polygons and the rendering bottleneck is processing all of that geometry on a single graphics card. In this case, each node runs a separate copy of the application, and for best results, the application needs to be at least partly aware that it’s running in a parallel environment so that it can give Chromium hints as to how to distribute the various objects to be rendered. Each node generates an image of a particular portion of the object space, and these images must be composited in such a way that the front-to-back ordering of pixels is maintained. This is generally done by collecting Z buffer data from each node to determine whether a particular pixel on a particular node is visible in the final image. The rendered images from each node are often composited using a “binary swap”, whereby the nodes combine their images in a cascading tree so that the overall compositing time is proportional to log2(N) rather than N.
To make this configuration work with VirtualGL:

- Start crappfaker on each of the rendering nodes.
- On Node 0, set CR_SYSTEM_GL_PATH to the appropriate value for the operating system and application type (see table in Section 13.2), then run:

vglrun crserver
The Chromium Utility Toolkit provides a convenient way for graphics applications to specifically take advantage of Chromium’s sort-last rendering capabilities. Such applications can use CRUT to explicitly specify how their object space should be decomposed. CRUT applications require an additional piece of software, crutserver, to be running on Node 0. So to make such applications work with VirtualGL:

- Start crappfaker on each of the rendering nodes.
- On Node 0, set CR_SYSTEM_GL_PATH to the appropriate value for the operating system and application type (see table in Section 13.2), then run:

vglrun crutserver &
vglrun crserver
Chromium’s use of X11 is generally not very optimal. It assumes a very fast connection between the X server and the Chromium Server. In certain modes, Chromium polls the X server on every frame to determine whether windows have been resized, etc. Thus, we have observed that, even on a fast network, Chromium tends to perform much better with VirtualGL running in a TurboVNC session as opposed to VirtualGL running in Direct Mode.
ModViz Virtual Graphics PlatformTM is a polished commercial clustered rendering framework for Linux which supports all three of the rendering modes described above and provides a much more straightforward interface to configure and run these types of parallel rendering jobs.
All VGP jobs, regardless of configuration, are spawned through vglauncher, a front-end program which automatically takes care of starting the appropriate processes on the rendering nodes, intercepting OpenGL calls from the application instance(s), sending rendered images back to Node 0, and compositing the images as appropriate. In a similar manner to VirtualGL’s vglrun, VGP’s vglauncher preloads a library (libVGP.so) in place of libGL.so, and this library intercepts the OpenGL calls from the application.

So our strategy here is similar to our strategy for loading the Chromium App Faker. We want to insert VirtualGL between VGP and the real system OpenGL library, so that VGP will call VirtualGL and VirtualGL will call libGL.so. Achieving this with VGP is relatively simple:

export VGP_BACKING_GL_LIB=librrfaker.so
vglrun vglauncher --preload=librrfaker.so:/usr/lib/libGL.so {application}

Replace /usr/lib/libGL.so with the full path of your system’s OpenGL library (/usr/lib64/libGL.so if you are launching a 64-bit application.)
| Application | Platform | Recipe | Notes |
|---|---|---|---|
| Army Ops | Linux/x86 | vglrun -dl armyops | See Section 12 for more details |
| Descent 3 | Linux/x86 | vglrun descent3 -g /usr/lib/librrfaker.so or vglrun -dl descent3 | See Section 12 for more details |
| Doom 3 | Linux/x86 | vglrun doom3 +set r_glDriver /usr/lib/librrfaker.so or vglrun -dl doom3 | See Section 12 for more details |
| Enemy Territory (Return to Castle Wolfenstein) | Linux/x86 | vglrun et +set r_glDriver /usr/lib/librrfaker.so or vglrun -dl et | See Section 12 for more details |
| Heretic II | Linux/x86 | vglrun heretic2 +set gl_driver /usr/lib/librrfaker.so +set vid_ref glx or vglrun -dl heretic2 +set vid_ref glx | See Section 12 for more details |
| Heavy Gear II | Linux/x86 | vglrun hg2 -o /usr/lib/librrfaker.so or vglrun -dl hg2 | See Section 12 for more details |
| I-deas Master Series 9, 10, & 11 | Solaris/Sparc | When running I-deas with VirtualGL on a Solaris/Sparc server, remotely displaying to a non-Sparc client machine or to an X proxy such as VNC, it may be necessary to set the SDRC_SUN_IGNORE_GAMMA environment variable to 1. | I-deas normally aborts if it detects that the X visual assigned to it is not gamma-corrected. But gamma-corrected X visuals only exist on Solaris/Sparc X servers, so if you are displaying the application using another type of X server or X proxy which doesn’t provide gamma-corrected X visuals, then it is necessary to override the gamma detection mechanism in I-deas. |
| Java2D applications that use OpenGL | Linux, Solaris | Java2D will use OpenGL to perform its rendering if sun.java2d.opengl is set to True, e.g.: java -Dsun.java2d.opengl=True MyAppClass. In order for this to work in VirtualGL, it is necessary to invoke vglrun with the -dl switch, e.g.: vglrun -dl java -Dsun.java2d.opengl=True MyAppClass. If you are using Java v6 b92 or later, you can also set the environment variable J2D_ALT_LIBGL_PATH to the path of librrfaker.so, e.g.: setenv J2D_ALT_LIBGL_PATH /opt/SUNWvgl/lib/librrfaker.so; vglrun java -Dsun.java2d.opengl=True MyAppClass | See Section 12 for more details |
| Pro/ENGINEER Wildfire v2.0 | Solaris/Sparc | Add graphics opengl to ~/config.pro. You may also need to set the VGL_XVENDOR environment variable to "Sun Microsystems, Inc." if you are running Pro/ENGINEER 2.0 over a remote X connection to a Linux or Windows VirtualGL client. | Pro/E 2.0 for Solaris will disable OpenGL if it detects a remote connection to a non-Sun X server. |
| QGL (OpenGL Qt Widget) | Linux | vglrun -dl {application} | Qt can be built such that it either resolves symbols from libGL automatically or uses dlopen() to manually resolve those symbols from libGL. As of Qt v3.3, the latter behavior is the default, so OpenGL programs built with later versions of libQt will not work with VirtualGL unless the -dl switch is used with vglrun. See Section 12 for more details |
| Quake 3 | Linux/x86 | vglrun quake3 +set r_glDriver /usr/lib/librrfaker.so or vglrun -dl quake3 | See Section 12 for more details |
| Soldier of Fortune | Linux/x86 | vglrun sof +set gl_driver /usr/lib/librrfaker.so or vglrun -dl sof | See Section 12 for more details |
| Unreal Tournament 2004 | Linux/x86 | vglrun -dl ut2004 | See Section 12 for more details |
| VisConcept | Solaris/Sparc | Set the environment variable VGL_GUI_XTTHREADINIT to 0. | Popping up the VirtualGL configuration dialog may cause the application to hang unless you set this environment variable. See Section 18.1 for more details. |
The general idea behind VirtualGL is to offload the 3D rendering work to the server so that the client only needs the ability to draw 2D images. But unfortunately, there is no way to draw stereo images using 2D (X11) commands, so the VirtualGL client must use OpenGL to draw in stereo. When an application requests a stereo visual, VirtualGL will attempt to ascertain whether the client supports OpenGL and, if so, whether it has stereo visuals available. VirtualGL then checks the server’s display to see whether it has stereo visuals available as well. If both are true, then VirtualGL will return a stereo visual to the application. If, for any given frame, VirtualGL detects that the application has drawn something to one of the right eye buffers, it will read back both eye buffers and send the contents as a pair of compressed images (one for each eye) to the VirtualGL client. The VGL client then decompresses the stereo image pair and draws it as a single stereo frame to the client’s display using glDrawPixels().

The upshot of this is that, in order to use stereo in VirtualGL, the client machine must support OpenGL and GLX (Exceed 3D is required for Windows clients) and must have a graphics card (such as the nVidia Quadro, etc.) which is capable of drawing in stereo. It is usually necessary to explicitly enable stereo visuals in the graphics card configuration for both the client and server machines. Use glxinfo to verify whether or not stereo visuals are enabled on both client and server before attempting to run VirtualGL with a stereo application.
Stereo requires Direct Mode. If VirtualGL is running in Raw Mode and the application renders something in stereo, only the contents of the left eye buffer will be sent to the display.
Transparent overlays have similar requirements and restrictions to stereo. In this case, VirtualGL completely bypasses its own GLX faker and uses indirect OpenGL rendering to render the transparent overlay on the client machine’s 3D hardware. The underlay is still rendered on the server, as always. Using indirect rendering to render the overlay is unfortunately necessary, because there is no reliable way to draw to an overlay using 2D (X11) functions, there are severe performance issues (on some cards) with using glDrawPixels() to draw to the overlay, and there is no reasonable way to composite the overlay and underlay on the server machine.
However, overlays are generally used only for drawing small, simple, static shapes and text, so we have found that it is usually faster to send the overlay geometry over to the client rather than rendering it as an image and sending the image. So even if it were possible to implement overlays without using indirect rendering, it’s likely that indirect rendering would still be the fastest approach.
As with stereo, overlays often must be explicitly enabled in the graphics card’s configuration. In the case of overlays, however, they need only be supported and enabled on the client machine.
Indexed color (8-bit) overlays have been tested and are known to work
with nVidia Quadro hardware. True color (24-bit) overlays will probably
work as well, but they have not been tested. Use glxinfo
to verify whether your client’s X display supports overlays and
whether they are enabled. In Exceed 3D, make sure that the “Overlay
Support” option is checked in the “Exceed 3D and GLX”
applet.
As with stereo, overlays do not work inside an X proxy session. VirtualGL must be displaying to a real X server on the client machine.
In a PseudoColor visual, each pixel is represented by an index which refers to a location in a color table. The color table stores the actual color values (256 of them in the case of 8-bit PseudoColor) which correspond to each index. An application merely tells the X server which color index to use when drawing, and the X server takes care of mapping that index to an actual color from the color table. OpenGL allows for rendering to Pseudocolor visuals, and it does so by being intentionally ignorant of the relationship between indices and actual colors. As far as OpenGL is concerned, each color index value is just a meaningless number, and it is only when the final image is drawn by the X server that these numbers take on meaning. As a result, many pieces of OpenGL’s core functionality, such as lighting and shading, either have undefined behavior or do not work at all with PseudoColor rendering. PseudoColor rendering used to be a common technique to visualize scientific data, because such data often only contained 8 bits per sample to begin with. Applications could manipulate the color table to allow the user to dynamically control the relationship between sample values and colors. As more and more graphics cards drop support for PseudoColor rendering, however, the applications which use it are a vanishing breed.
VirtualGL supports PseudoColor rendering if a PseudoColor visual is
available on the client’s display. A PseudoColor visual need
not be present on the server. On the server, VirtualGL uses the red
channel of a standard RGB Pbuffer to store the color index. Upon receiving
an end of frame trigger, VirtualGL reads back the red channel of the
Pbuffer and uses XPutImage()
to draw it into the appropriate
window. The upshot of this is that there is no compression with PseudoColor
rendering in VirtualGL. However, since there is only 1 byte per pixel
in this mode, the images can still be sent to the client reasonably
quickly even though they are uncompressed.
PseudoColor rendering should work in VNC, provided that the VNC server is configured for 8-bit PseudoColor. TurboVNC does not support PseudoColor, but RealVNC and other VNC flavors do. Note, however, that VNC cannot provide both PseudoColor and TrueColor visuals at the same time.
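For example, one might start an 8-bit PseudoColor VNC session with a command along these lines (a sketch, assuming a RealVNC-style vncserver script that passes standard X server arguments, such as -cc, through to Xvnc):
vncserver -depth 8 -cc 3
Here, -depth 8 requests 8 bits per pixel and -cc 3 requests a PseudoColor default visual class.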
The easiest way to uncover bottlenecks in the VirtualGL pipeline is
to set the VGL_PROFILE
environment variable to 1
on both server and client (passing an argument of +pr
to vglrun
on the server has the same effect.) This will
cause VirtualGL to measure and report the throughput of the various
stages in its pipeline. For example, here are some measurements from
a dual Pentium 4 server communicating with a Pentium III client on
a 100 Mbit LAN:
Server:
Readback   - 43.27 Mpixels/sec - 34.60 fps
Compress 0 - 33.56 Mpixels/sec - 26.84 fps
Total      -  8.02 Mpixels/sec -  6.41 fps - 10.19 Mbits/sec (18.9:1)
Client:
Decompress - 10.35 Mpixels/sec -  8.28 fps
Blit       - 35.75 Mpixels/sec - 28.59 fps
Total      -  8.00 Mpixels/sec -  6.40 fps - 10.18 Mbits/sec (18.9:1)
The total throughput of the pipeline is 8.0 Mpixels/sec, or 6.4 frames/sec, indicating that our frame is 8.0 / 6.4 = 1.25 Megapixels in size (a little less than 1280 x 1024 pixels.) The readback and compress stages, which occur in parallel on the server, are obviously not slowing things down. And we’re only using 1/10 of our available network bandwidth. So we look to the client and discover that its slow decompression speed is the primary bottleneck. Decompression and blitting on the client do not occur in parallel, so the aggregate rate is the serial combination of the two rates: 1 / (1/10.35 + 1/35.75) ≈ 8.0 Mpixels/sec.
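To collect measurements like these, one might do the following on the application server (a sketch; myglxapp is a hypothetical application name):
export VGL_PROFILE=1
vglrun myglxapp
(or, equivalently, vglrun +pr myglxapp.) On the client machine, set VGL_PROFILE=1 in the environment before starting the VirtualGL Client in order to obtain the client-side measurements.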
By default, VirtualGL will only send a frame to the client if the client is ready to receive it. If a rendered frame arrives at the server’s queue and a previous frame is still being processed, the new frame is dropped (“spoiled.”) This prevents a backlog of frames on the server, which would cause a perceptible delay in the responsiveness of interactive applications. But when running non-interactive applications, particularly benchmarks, it may be desirable to disable frame spoiling. With frame spoiling disabled, the server will render frames only as quickly as the VirtualGL pipeline can receive them, which will conserve server resources as well as allow OpenGL benchmarks to accurately measure the throughput of the VirtualGL pipeline. With frame spoiling enabled, these benchmarks will report meaningless data, since they are measuring the server’s rendering rate, and that rendering rate is decoupled from the overall throughput of VirtualGL.
To disable frame spoiling, set the VGL_SPOIL environment variable to 0 on the server or pass an argument of -sp to vglrun. See Section 18.1 for more details.
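For example, to benchmark a hypothetical application named myglxapp with frame spoiling disabled:
vglrun -sp myglxapp
or, equivalently:
export VGL_SPOIL=0
vglrun myglxapp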
VirtualGL includes several tools which can be useful in diagnosing performance problems with the system.
NetTest is a low-level network benchmark that uses the same network
classes as VirtualGL. It can be used to test the latency and throughput
of any TCP/IP connection, with or without SSL encryption. The VirtualGL
Linux package installs NetTest in /opt/VirtualGL/bin.
The VirtualGL Solaris package installs it in /opt/SUNWvgl/bin.
The Windows installer installs it in c:\program files\VirtualGL-{version}-{build}
by default.
To use NetTest, first start up the nettest server on one end of the connection:
nettest -server [-ssl]
(use -ssl
if you want to test the performance of SSL encryption
over this particular connection.)
Next, start the client on the other end of the connection:
nettest -client {server_name} [-ssl]
(server_name
is the hostname or IP address of the machine
where the NetTest server is running. Use -ssl
if the
NetTest server is running in SSL mode.)
The nettest client will produce output similar to the following:
TCP transfer performance between localhost and {server}:

Transfer size  1/2 Round-Trip  Throughput
(bytes)        (msec)          (MB/sec)
1              0.176896        0.005391
2              0.179391        0.010632
4              0.181600        0.021006
8              0.181292        0.042083
16             0.181694        0.083981
32             0.181690        0.167965
64             0.182010        0.335339
128            0.182197        0.669991
256            0.183593        1.329795
512            0.183800        2.656586
1024           0.186189        5.245015
2048           0.379702        5.143834
4096           0.546805        7.143778
8192           0.908712        8.597335
16384          1.643810        9.505359
32768          2.961701        10.551368
65536          5.769007        10.833754
131072         11.313003       11.049232
262144         22.412990       11.154246
524288         44.760510       11.170561
1048576        89.294810       11.198859
2097152        178.426602      11.209091
4194304        356.547194      11.218711
We can see that the throughput peaks out at about 11.2 MB/sec. 1 MB = 1048576 bytes, so 11.2 MB/sec = 94 million bits per second, which is pretty good for a 100 Mbit connection. We can also see that, as the transfer size decreases, the round-trip time becomes dominated by latency. The latency is the same thing as the 1/2 round-trip time for a zero-byte packet, which is about 0.18 ms in this case.
CPUstat is available only in the VirtualGL Linux packages and is located
in the same place as NetTest (/opt/VirtualGL/bin.) It
measures the average, minimum, and peak CPU usage for all processors
combined and for each processor individually. On Windows, this same
functionality is provided in the Windows Performance Monitor, which
is part of the operating system.
CPUstat measures the CPU usage over a given sample period (a few seconds) and continuously reports how much the CPU was utilized since the last sample period. Output for a particular sample looks something like this:
ALL : 51.0 (Usr= 47.5 Nice= 0.0 Sys= 3.5) / Min= 47.4 Max= 52.8 Avg= 50.8
cpu0: 20.5 (Usr= 19.5 Nice= 0.0 Sys= 1.0) / Min= 19.4 Max= 88.6 Avg= 45.7
cpu1: 81.5 (Usr= 75.5 Nice= 0.0 Sys= 6.0) / Min= 16.6 Max= 83.5 Avg= 56.3
The first column indicates what percentage of time the CPU was active since the last sample period (this is then broken down into what percentage of time the CPU spent running user, nice, and system/kernel code.) “ALL” indicates the average utilization across all CPU’s since the last sample period. “Min”, “Max”, and “Avg” indicate a running minimum, maximum, and average of all samples since cpustat was started.
Generally, if an application’s CPU usage is fairly steady, you can run CPUstat for a while and wait for the Max and Avg values in the “ALL” row to stabilize; these values indicate the application’s peak and average CPU utilization.
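For example (myglxapp is a hypothetical application name):
vglrun myglxapp &
/opt/VirtualGL/bin/cpustat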
TCBench was born out of the need to compare VirtualGL’s performance to other thin client packages, some of which had frame spoiling features that couldn’t be disabled. TCBench measures the frame rate of a thin client system as seen from the client’s point of view. It does this by attaching to one of the client windows and continuously reading back a small area at the center of the window. While this may seem to be a somewhat non-rigorous test, experiments have shown that if care is taken to make sure that the application is updating the center of the window on every frame (such as in a spin animation), TCBench can produce quite accurate results. It has been sanity checked with VirtualGL’s internal profiling mechanism and with a variety of system-specific techniques, such as monitoring redraw events on the client’s windowing system.
The VirtualGL Linux package installs TCBench in /opt/VirtualGL/bin.
The VirtualGL Solaris package installs TCBench in /opt/SUNWvgl/bin.
The Windows installer installs it in c:\program files\VirtualGL-{version}-{build}
by default. Run tcbench
from the command line, and it
will prompt you to click in the window you want to measure. That
window should already have an automated animation of some sort running
before you launch TCBench.
TCBench can also be used to measure the frame rate of applications that are running on the local console, although for extremely fast applications (those that exceed 40 fps on the local console), you may need to increase the sampling rate of TCBench to get accurate results. The default sampling rate of 50 samples/sec should be fine for measuring the throughput of VirtualGL and other thin client systems.
tcbench -? gives the relevant command line switches that can be used to adjust the benchmark time, the sampling rate, and the x and y offset of the sampling area within the window.
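A typical TCBench session might look like this (myglxapp is a hypothetical application name, started with an animation already running):
vglrun myglxapp &
/opt/VirtualGL/bin/tcbench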
Several of VirtualGL’s configuration parameters can be changed
on the fly once an application has started. This is accomplished by
using the VirtualGL configuration dialog, which can be activated by
holding down the CTRL
and SHIFT
keys and
pressing the F9
key while any one of the application’s
windows is active. This displays the VirtualGL configuration dialog box.
You can use this dialog to enable or disable frame spoiling or to adjust the JPEG quality and subsampling. Changes are reflected immediately in the application.
The JPEG quality and subsampling gadgets will only be shown if VirtualGL is running in direct mode. In raw mode, the only setting that can be changed with this dialog is frame spoiling.
The VGL_GUI
environment variable can be used to change
the key sequence used to pop up the dialog box. If the default of
CTRL-SHIFT-F9
is not suitable, then set VGL_GUI
to any combination of ctrl
, shift
, alt
,
and one of {f1, f2,..., f12}
(these are not
case sensitive.) e.g.
export VGL_GUI=CTRL-F9
will cause the dialog box to pop up whenever CTRL-F9
is
pressed.
To disable the VirtualGL dialog altogether, set VGL_GUI to none.
VirtualGL monitors the application’s X event loop to detect when a particular key sequence has been pressed. If an application is not monitoring key press events in its X event loop, then the VirtualGL configuration dialog might not pop up at all. There is unfortunately no workaround for this, but it should be a rare occurrence.
You can control the operation of the VirtualGL faker in four different ways. Each method of configuration takes precedence over the previous method:
1. Globally for all users of the application server (e.g. by setting the appropriate environment variable in /etc/profile)
2. For a particular user’s sessions (e.g. by setting the appropriate environment variable in ~/.bashrc)
3. For the current shell or script (e.g. export VGL_XXX={whatever})
4. For a single application instance, by passing a command-line argument to vglrun. This effectively overrides any previous environment variable setting corresponding to that configuration option.
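For instance (a sketch; myglxapp is a hypothetical application name, and the quality values are arbitrary):
export VGL_QUAL=80
vglrun -q 50 myglxapp
The environment variable establishes a JPEG quality of 80 for the session, but the -q argument overrides it, so this particular application instance runs with a quality of 50.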
Environment Variable Name | vglrun Command-Line Override | Description | Default Value
---|---|---|---
VGL_CLIENT |
-cl <client display> |
The X display where VirtualGL should send its image stream
When running in Direct Mode, VirtualGL uses a dedicated TCP/IP connection to transmit compressed images of an application’s OpenGL rendering area from the application server to the client display. Thus, the server needs to know on which machine the VirtualGL client software is running, and it needs to know which X display on that machine will be used to draw the application’s GUI. VirtualGL can normally surmise this by reading the application server’s DISPLAY environment variable. But in cases where X11 traffic is tunneled through SSH or LBX or another type of indirect X11 connection, the DISPLAY environment variable on the application server may not point to the client machine. In these cases, set VGL_CLIENT to the display where the application’s GUI will end up, e.g. export VGL_CLIENT={my_client_machine}:0.0
** This option has no effect in Raw Mode. ** |
Read from the DISPLAY environment |
VGL_COMPRESS=0 VGL_COMPRESS=1 |
-c <0, 1> |
0 = Raw Mode (send rendered images uncompressed via X11), 1 = Direct Mode (compress rendered images as JPEG & send on a separate socket)
When this option is set to 0, VirtualGL will bypass its internal image compression pipeline and instead use XPutImage() to composite the rendered 3D images into the appropriate application window. This mode (“Raw Mode”) is primarily useful in conjunction with VNC, NX, or other remote display software that performs X11 rendering on the server and uses its own mechanism for compressing and transporting images to the client. Enabling Raw Mode on a remote X11 connection is not advisable, since it will result in uncompressed images being sent over the network. If this option is not specified, then VirtualGL’s default behavior is to use Direct Mode when the application is being displayed to a remote X server and to use Raw Mode otherwise. VirtualGL assumes that if the DISPLAY environment variable begins with a colon or with “unix:” (e.g. “:0.0”, “unix:1000.0”, etc.), then the X11 connection is local and thus doesn’t require image compression. Otherwise, it assumes that the X11 connection is remote and that compression is required. If the display string begins with “localhost” or with the server’s hostname, VGL assumes that the display is being tunneled through SSH, and it enables Direct Mode in this case. NOTE: Stereo does not work with Raw Mode. See Section 9 for more details. |
Compression enabled (“Direct Mode”) if the application is displaying to a remote X server, disabled (“Raw Mode”) otherwise |
VGL_DISPLAY |
-d <display or GLP device> |
The display or GLP device to use for 3D rendering If your server has multiple 3D graphics cards and you want the OpenGL rendering to be redirected to a display other than :0, set VGL_DISPLAY=:1.0 or whatever. This could be used, for instance, to support many application instances on a beefy multi-pipe graphics server. GLP mode (Solaris/Sparc only): Setting this option to GLP will enable GLP mode and select the first available GLP device for rendering. You can also set this option to the pathname of a specific GLP device (e.g. /dev/fbs/jfb0 .) GLP is a special feature of Sun’s OpenGL library which allows an application to render into Pbuffers on a graphics card even if there is no X server running on that graphics card. See Section 11 for more details on GLP. |
:0 |
VGL_FPS |
-fps <floating point number greater than 0> |
Limit the client/server frame rate to the specified number of frames per second Setting VGL_FPS or passing -fps as an argument to vglrun will enable VirtualGL’s frame rate governor. When enabled, the frame rate governor will attempt to limit the overall throughput of the VirtualGL pipeline to the specified number of frames/second. If frame spoiling is disabled, this effectively limits the server’s rendering frame rate as well. This option applies regardless of whether VirtualGL is being run in Direct Mode (with compression enabled) or in Raw Mode (with compression disabled.) |
Frame rate governor disabled |
VGL_GAMMA=0 VGL_GAMMA=1 VGL_GAMMA=<gamma correction factor> |
-g or +g or -gamma <gamma correction factor> |
“Gamma” refers to the relationship between the intensity of light which your computer’s monitor is instructed to display and the intensity which it actually displays. The curve is a power curve of the form Y = X^G, where X is between 0 and 1. G is called the “gamma” of the monitor. PC monitors and TVs usually have a gamma of around 2.2. Some of the math involved in 3D rendering assumes a linear gamma (G = 1.0), so technically speaking, 3D applications will not display with mathematical correctness unless the pixels are “gamma corrected” to counterbalance the non-linear response curve of the monitor. But some systems do not have any form of built-in gamma correction, and thus the applications developed for such systems have usually been designed to display properly without gamma correction. Gamma correction involves passing pixels through a function of the form X = W^(1/G), where G is the “gamma correction factor” and should be equal to the gamma of the monitor. So the final output is Y = X^G = (W^(1/G))^G = W, which describes a linear relationship between the intensity of the pixels drawn by the application and the intensity of the pixels displayed by the monitor.
VGL_GAMMA=1 or vglrun +g : Enable gamma correction with default settings. This option tells VirtualGL to enable gamma correction using the best available method. If VirtualGL is remotely displaying to a Solaris/Sparc X server which has gamma-corrected X visuals, then VGL will attempt to assign one of these visuals to the application. This causes the 3D output of the application to be gamma corrected by the factor specified in fbconfig on the client machine (default: 2.22.) Otherwise, if the X server (or proxy) does not have gamma-corrected X visuals or if the gamma-corrected visuals it has do not match the application’s needs, then VirtualGL performs gamma correction internally and uses a default gamma correction factor of 2.22. This option emulates the default behavior of OpenGL applications running locally on Sparc machines.
VGL_GAMMA=0 or vglrun -g : Disable gamma correction. This option tells VGL not to use gamma-corrected visuals, even if they are available on the X server, and disables VGL’s internal gamma correction system as well. This emulates the default behavior of OpenGL applications running locally on Linux or Solaris/x86 machines.
VGL_GAMMA={gamma correction factor} or vglrun -gamma {gamma correction factor} : Enable VGL’s internal gamma correction system with the specified gamma correction factor. If VGL_GAMMA is set to an arbitrary floating point value, then VirtualGL performs gamma correction internally using the specified value as the gamma correction factor. You can also specify a negative value to apply a “de-gamma” function. Specifying a gamma correction factor of G (where G < 0) is equivalent to specifying a gamma correction factor of -1/G. |
VGL_GAMMA=1 on Solaris/Sparc VGL servers, VGL_GAMMA=0 otherwise |
VGL_GLLIB |
The location of an alternate OpenGL library Normally, VirtualGL loads the first OpenGL dynamic library that it finds in the dynamic linker path (usually /usr/lib/libGL.so.1 , /usr/lib64/libGL.so.1 , or /usr/lib/64/libGL.so.1 .) You can use this setting to explicitly specify another OpenGL dynamic library to load. Normally, you shouldn’t need to muck with this unless something doesn’t work. However, this setting is necessary when using VirtualGL with Chromium. |
||
VGL_GUI |
Key sequence used to invoke the configuration dialog VirtualGL will normally monitor an application’s X event queue and pop up the VirtualGL configuration dialog whenever CTRL-SHIFT-F9 is pressed. In the event that this interferes with a key sequence that the application is already using, you can redefine the key sequence used to pop up VGL’s configuration dialog by setting VGL_GUI to some combination of shift , ctrl , alt , and one of {f1, f2, ..., f12} . You can also set VGL_GUI to none to disable the configuration dialog altogether. See Section 17 for more details. |
shift-ctrl-f9 | |
VGL_GUI_XTTHREADINIT |
0 to prevent VGL from calling XtToolkitThreadInitialize()
Xt & Motif applications are supposed to call XtToolkitThreadInitialize() if they plan to access Xt functions from two or more threads simultaneously. But rarely, a multi-threaded Xt/Motif application may avoid calling XtToolkitThreadInitialize() and rely on the fact that avoiding this call disables application and process locks. This behavior is generally considered errant on the part of the application, but the application developers have probably figured out other ways around the potential instability that this situation creates. The problem arises whenever VirtualGL pops up its configuration dialog (which is written using Xt.) In order to create this dialog, VirtualGL creates a new Xt thread and calls XtToolkitThreadInitialize() as it is supposed to do to guarantee thread safety. But if the application into which VGL is loaded exhibits the errant behavior described above, suddenly enabling application and process locks may cause the application to deadlock. Setting VGL_GUI_XTTHREADINIT to 0 will remove VGL’s call to XtToolkitThreadInitialize() and should thus eliminate the deadlock. In short, if you try to pop up the VirtualGL config dialog and notice that it hangs the application, try setting VGL_GUI_XTTHREADINIT to 0. |
1 | |
VGL_NPROCS |
-np <# of CPUs> or -np 0 (automatically determine the optimal number of CPUs to use) |
Specify the number of CPUs to use for multi-threaded compression VirtualGL can divide the task of compressing each frame among multiple server CPUs. This might speed up the overall throughput if the compression stage of the pipeline is the primary bottleneck. The default behavior (equivalent to setting VGL_NPROCS=0 ) is to use all but one of the available CPUs, up to a maximum of 3 total. On a large multiprocessor system, the speedup is almost linear up to 3 processors, but the algorithm scales very little past that point. VirtualGL will not allow more than 4 processors total to be used for compression, nor will it allow you to assign more processors than are available in the system. ** This option has no effect in “Raw” Mode. ** |
1P system: 1
2P system: 1
3P system: 2
4P & larger: 3 |
VGL_PORT |
-p <port> |
The TCP port to use when connecting to the client ** This option has no effect in “Raw” Mode. ** |
4242 for unencrypted connections, 4243 for SSL connections |
VGL_PROFILE=0 VGL_PROFILE=1 |
-pr or +pr |
Enable/disable profiling output If enabled, this will cause the VirtualGL faker to continuously benchmark itself and periodically print out the throughput of reading back, compressing, and sending pixels to the client. See Section 16 for more details. |
Profiling disabled |
VGL_QUAL |
-q <1-100> |
An integer between 1 and 100 (inclusive)
This setting allows you to specify the quality of the JPEG compression. Lower is faster but also grainier. The default setting should produce perceptually lossless image quality. ** This option has no effect in “Raw” Mode. ** |
95 |
VGL_READBACK=0 VGL_READBACK=1 |
Enable/disable readback On rare occasions, it might be desirable to have VirtualGL redirect OpenGL rendering from an application into a Pbuffer but not automatically read back and send the rendered pixels. Some applications have their own mechanisms for reading back the buffer, so disabling VirtualGL’s readback mechanism prevents duplication of effort. This feature was developed initially to support running ParaView in parallel using MPI. ParaView MPI normally uses MPI processes 1 through N as rendering servers, each drawing a portion of the geometry into a separate window running on a separate X display. ParaView reads back these server windows and composites the pixels into the main application window, which is controlled by MPI process 0. By creating a script which passes a different value of VGL_DISPLAY and VGL_READBACK to each MPI process, it is possible to make all of the ParaView server processes render to off-screen buffers on different graphics cards while preventing VirtualGL from displaying any pixels except those generated by process 0. |
Readback enabled | |
VGL_SPOIL=0 VGL_SPOIL=1 |
-sp or +sp |
Enable/disable frame spoiling By default, VirtualGL will drop frames so as not to slow down the rendering rate of the server’s graphics engine. This should produce the best results with interactive applications, but it may be desirable to turn off frame spoiling when running benchmarks or other non-interactive applications. Turning off frame spoiling will force one frame to be read back and sent on each end-of-frame event, so that the frame rate reported by OpenGL benchmarks will accurately reflect the frame rate seen by the user. Disabling frame spoiling also prevents non-interactive applications from wasting graphics resources by rendering frames that will never be seen. With frame spoiling turned off, the rendering pipeline behaves as if it is fill-rate limited to about 30 or 40 Megapixels/second, the maximum throughput of the VirtualGL system on current CPU’s. |
Spoiling enabled |
VGL_SSL=0 VGL_SSL=1 |
-s or +s |
Tunnel the VirtualGL compressed image stream inside a secure socket layer ** This option has no effect in “Raw” Mode. ** |
SSL disabled |
VGL_SUBSAMP |
-samp <411|422|444> |
411, 422, or 444 This allows you to manually specify the level of chrominance subsampling in the JPEG compressor. By default, VirtualGL uses no chrominance subsampling (AKA “4:4:4 subsampling”) when it compresses images for delivery to the client. Subsampling is premised on the fact that the human eye is more sensitive to changes in brightness than to changes in color. Since the JPEG image format uses a colorspace in which brightness (luminance) and color (chrominance) are separated into different channels, one can sample the brightness for every pixel and the color for every other pixel and produce an image which has 16 million colors but uses an average of only 16 bits per pixel instead of 24. This is called “4:2:2 subsampling”, since for every 4 pixels of luminance, there are only 2 pixels of each chrominance component. Likewise, one can sample every fourth chrominance component to produce a 16-million color image with only 12 bits per pixel. The latter is called “4:1:1 subsampling.” Subsampling increases the performance and reduces the network usage, since there is less data to move around, but it can produce some visible artifacts. Subsampling artifacts are rarely observed with volume data, since it usually only contains 256 colors to begin with. But narrow, aliased lines and other sharp features on a black background will tend to produce subsampling artifacts. The Axis Indicator from a Popular Visualization App displayed with 4:4:4, 4:2:2, and 4:1:1 subsampling (respectively): NOTE: If you select 4:1:1 subsampling, VirtualGL will in fact try to use 4:2:0 instead. 4:2:0 samples every other pixel both horizontally and vertically rather than sampling every fourth pixel horizontally. But not all JPEG codecs support 4:2:0, so 4:1:1 is used when 4:2:0 is not available. ** This option has no effect in “Raw” Mode. ** |
444 |
VGL_SYNC=0 VGL_SYNC=1 |
-sync or +sync |
Enable/disable strict 2D/3D synchronization (necessary to pass GLX conformance tests) Normally, VirtualGL’s operation is asynchronous from the point of view of the application. The application swaps the buffers or calls glFinish() or glFlush() or glXWaitGL() , and VirtualGL reads back the framebuffer and sends the pixels to the client’s display … eventually. This will work fine for the vast majority of applications, but it is not strictly conformant. Technically speaking, when an application calls glXWaitGL() or glFinish() , it is well within its rights to expect the OpenGL-rendered pixels to be immediately available in the X window. Fortunately, very few applications actually do expect this, but on rare occasions, an application may try to use XGetImage() or other X11 functions to obtain a bitmap of the pixels that were rendered by OpenGL. Enabling VGL_SYNC is a somewhat extreme measure that may be needed to get such applications to work properly. It was developed primarily as a way to pass the GLX conformance suite (conformx , specifically.) When VGL_SYNC is enabled, every call to glFinish() or glXWaitGL() will cause the contents of the server’s framebuffer to be read back and synchronously drawn into the client’s window without compression or frame spoiling. The call to glFinish() or glXWaitGL() will not return until VirtualGL has verified that the pixels have been delivered into the client’s window. As such, enabling this mode can have potentially dire effects on performance. |
Synchronization disabled |
VGL_TILESIZE |
A number between 8 and 1024 (inclusive)
Normally, in Direct Mode, VirtualGL will divide an OpenGL window into tiles of 256x256 pixels, compare each tile vs. the previous frame, and only compress & send the tiles which have changed. It will also divide up the task of compressing these tiles among the available CPUs in a round robin fashion, if multi-threaded compression is enabled. There are several tradeoffs to consider when choosing a tile size. Smaller tiles can more easily be divided up among multiple CPUs, but they compress less efficiently (and less quickly) on an individual basis. Using larger tiles can reduce traffic to the client by allowing the server to send only one frame update instead of many. But on the flip side, using larger tiles decreases the chance that a tile will be unchanged from the previous frame. Thus, the server may only send one or two packets per frame, but the cumulative size of those packets may be much larger than if a smaller tile size was used. 256x256 was chosen as the default because, in experiments, it provided the best balance between scalability and efficiency on the platforms that VirtualGL supports. ** This option has no effect in “Raw” Mode. ** |
256 | |
VGL_TRACE=0 VGL_TRACE=1 |
-tr or +tr |
Enable/disable tracing When tracing is enabled, VirtualGL will log all calls to the GLX and X11 functions it is intercepting, as well as the arguments, return values, and execution times for those functions. This is useful when diagnosing interaction problems between VirtualGL and a particular OpenGL application. |
Tracing disabled |
VGL_VERBOSE=0 VGL_VERBOSE=1 |
-v or +v |
Enable/disable verbosity When in verbose mode, VirtualGL will reveal some of the decisions it makes behind the scenes, such as which code path it is using to compress JPEG images, which type of X11 drawing it is using, etc. This can be helpful when diagnosing performance problems. |
Verbosity disabled |
VGL_X11LIB |
the location of an alternate X11 library Normally, VirtualGL loads the first X11 dynamic library that it finds in the dynamic linker path (usually /usr/lib/libX11.so.? , /usr/lib/64/libX11.so.? , /usr/X11R6/lib/libX11.so.? , or /usr/X11R6/lib64/libX11.so.? .) You can use this setting to explicitly specify another X11 dynamic library to load. Normally, you shouldn’t need to muck with this unless something doesn’t work. |
||
VGL_XVENDOR |
Return a fake X11 vendor string when the application calls XServerVendor() Some applications expect XServerVendor() to return a particular value, which the application (sometimes erroneously) uses to figure out whether it’s running locally or remotely. This setting allows you to fool such applications into thinking they’re running on a “local” X server rather than a remote connection. |
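As an illustration of combining several of the above options on one command line (a sketch; myglxapp is a hypothetical application name, and the values are arbitrary):
vglrun -q 70 -samp 422 -np 2 +pr myglxapp
This runs the application with a JPEG quality of 70, 4:2:2 chrominance subsampling, 2 compression CPUs, and profiling output enabled.
The following table describes the environment variables recognized by the VirtualGL Client.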
Environment Variable Name | Description | Default Value |
---|---|---|
VGL_PROFILE=0 VGL_PROFILE=1 |
Enable/disable profiling output If enabled, this will cause the VirtualGL client to continuously benchmark itself and periodically print out the throughput of decompressing and drawing pixels into the application window. See Section 16 for more details. |
Profiling disabled |
VGL_VERBOSE=0 VGL_VERBOSE=1 |
Enable/disable verbosity When in verbose mode, VirtualGL will reveal some of the decisions it makes behind the scenes, such as which code path it is using to decompress JPEG images, which type of X11 drawing it is using, etc. This can be helpful when diagnosing performance problems. |
Verbosity disabled |
vglclient Command-Line Arguments
vglclient Argument | Description | Default
---|---|---
-port <port number> |
Causes the client to listen for unencrypted connections on the specified TCP port | 4242 |
-sslport <port number> |
Causes the client to listen for SSL connections on the specified TCP port | 4243 |
-sslonly |
Causes the client to reject all unencrypted connections | Accept both SSL and unencrypted connections |
-nossl |
Causes the client to reject all SSL connections | Accept both SSL and unencrypted connections |
-l <log file> |
Redirect all output from the client to the specified file | Output goes to stderr |
-x |
Use X11 functions to draw pixels into the application window | Use OpenGL on Solaris/Sparc or if stereo is enabled; use X11 otherwise |
-gl |
Use OpenGL functions to draw pixels into the application window | Use OpenGL on Solaris/Sparc or if stereo is enabled; use X11 otherwise |
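For example, to start the client so that it accepts only SSL connections and logs its output to a file (a sketch; the log file path is arbitrary):
vglclient -sslonly -l /tmp/vglclient.log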