User’s Guide for VirtualGL 2.0.1 and TurboVNC 0.3.3
Intended audience: System Administrators, Graphics Programmers, Researchers, and others with knowledge of the Linux or Solaris operating systems, OpenGL and GLX, and the X Window System.
This document and all associated illustrations are licensed under the Creative Commons Attribution 2.5 License. Any works which contain material derived from this document must cite The VirtualGL Project as the source of the material and list the current URL for the VirtualGL web-site.
This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit (http://www.openssl.org/). Further information is contained in LICENSE-OpenSSL.txt, which can be found in the same directory as this documentation.
VirtualGL is licensed under the wxWindows Library License, v3, a derivative of the GNU Lesser General Public License (LGPL).
VirtualGL is an open source package which gives any Unix or Linux remote display software the ability to run OpenGL applications with full 3D hardware acceleration. Some remote display software, such as VNC, lacks the ability to run OpenGL applications entirely. Other remote display software forces OpenGL applications to use a slow software-only OpenGL renderer, to the detriment of both performance and compatibility. And running OpenGL applications using the traditional remote X-Windows approach causes all of the OpenGL commands and 3D data to be sent over the network and rendered on the client machine, which is not a tenable proposition unless the data is relatively small and static, the network is fast, and the OpenGL application is specifically tuned for a remote X-Windows environment.
With VirtualGL, the OpenGL commands and 3D data are instead redirected to a 3D graphics accelerator on the server machine, and only the rendered 3D images are sent to the client machine. VirtualGL thus “virtualizes” 3D graphics hardware, allowing it to be co-located in the “cold room” with compute and storage resources. VirtualGL also allows 3D graphics hardware to be shared among multiple users, and it provides real-time performance on even the most modest of networks. This makes it possible for large, noisy, hot 3D workstations to be replaced with laptops or even thinner clients, but more importantly, it eliminates the workstation and the network as barriers to data size. Users can now visualize gigabytes and gigabytes of data in real time without needing to cache any of the data locally or sit in front of the machine that is rendering the data.
VirtualGL has two basic modes of operation: “Direct” Mode and “Raw” Mode. In both modes, a separate X-Windows server (or X-Windows proxy) is used to display the application’s GUI and to provide keyboard/mouse interaction.
| | Server (x86) | Server (x86-64) | Client |
|---|---|---|---|
| Recommended CPU | Pentium 4, 1.7 GHz or faster (or equivalent) | Pentium 4/Xeon with EM64T, or AMD Opteron or Athlon64, 1.8 GHz or faster | Pentium III or Pentium 4, 1.0 GHz or faster (or equivalent) |
| Graphics | Any decent 3D graphics card that supports Pbuffers | Any decent 3D graphics card that supports Pbuffers | Any graphics card with decent 2D performance |
| Recommended O/S | | | |
| Other Software | X server configured to export True Color (24-bit or 32-bit) visuals | | |
VirtualGL should build and run on Itanium Linux, but it has not been thoroughly tested. Contact us if you encounter any difficulties.
| | Server | Client |
|---|---|---|
| Recommended CPU | Pentium 4/Xeon with EM64T, or AMD Opteron or Athlon64, 1.8 GHz or faster | Pentium III or Pentium 4, 1.0 GHz or faster (or equivalent) |
| Graphics | nVidia 3D graphics card | Any graphics card with decent 2D performance |
| O/S | Solaris 10 or higher | |
| Other Software | | |
* Solaris 10/x86 comes with mediaLib pre-installed, but it is strongly recommended that you upgrade this version of mediaLib to at least 2.4. This will greatly increase the performance of Solaris/x86 VirtualGL clients as well as the performance of 32-bit applications on Solaris/x86 VirtualGL servers.
| | Server | Client |
|---|---|---|
| Recommended CPU | UltraSPARC III 900 MHz or faster | UltraSPARC III 900 MHz or faster |
| Graphics | Any decent 3D graphics card that supports Pbuffers | Any graphics card with decent 2D performance |
| O/S | Solaris 8 or higher | |
| Other Software | | |
| | Client |
|---|---|
| Recommended CPU | Pentium III or Pentium 4, 1.0 GHz or faster (or equivalent) |
| Graphics | Any graphics card with decent 2D performance |
| O/S | Windows 2000 or later |
| Other Software | |
| | Server | Client |
|---|---|---|
| Linux | 3D graphics card that supports stereo (example: nVidia Quadro) and is configured to export stereo visuals | |
| Solaris/x86 | | |
| Solaris/Sparc | | |
| Windows | N/A | |
| | Client |
|---|---|
| Linux | 3D graphics card that supports transparent overlays (example: nVidia Quadro) and is configured to export overlay visuals |
| Solaris/x86 | |
| Solaris/Sparc | |
| Windows | |
VirtualGL must be installed on any machine that will act as a VirtualGL server or as a VirtualGL Direct Mode client. It is not necessary to install VirtualGL on the client machine if Raw Mode is to be used.
Download the TurboJPEG RPM (turbojpeg-{version}.i386.rpm for 32-bit systems and turbojpeg-{version}.x86_64.rpm for 64-bit systems) from the files area of the VirtualGL SourceForge web-site. The 64-bit RPM provides both 32-bit and 64-bit TurboJPEG libraries. (.tgz packages are provided for users of non-RPM-based Linux distributions. You can use alien to convert these into .deb packages if you prefer.) As root, install the package:

rpm -U turbojpeg*.rpm
Download the VirtualGL RPM (VirtualGL-{version}.i386.rpm for 32-bit systems and VirtualGL-{version}.x86_64.rpm for 64-bit systems) from the files area of the VirtualGL SourceForge web-site. The 64-bit RPM provides both 32-bit and 64-bit VirtualGL components. As root, remove any existing VirtualGL package and install the new one:

rpm -e VirtualGL
rpm -i VirtualGL*.rpm
Download the VirtualGL package (SUNWvgl-{version}.pkg.bz2 for Sparc and SUNWvgl-{version}-x86.pkg.bz2 for x86) from the files area of the VirtualGL SourceForge web-site. Both packages provide both 32-bit and 64-bit VirtualGL components. As root, remove any existing VirtualGL package, then uncompress and install the new one:

pkgrm SUNWvgl (answer “Y” when prompted)
bzip2 -d SUNWvgl-{version}.pkg.bz2
pkgadd -d SUNWvgl-{version}.pkg

Select the SUNWvgl package (usually option 1) from the menu. VirtualGL for Solaris installs into /opt/SUNWvgl.
Download the VirtualGL installer (VirtualGL-{version}.exe) from the files area of the VirtualGL SourceForge web-site and run it.
If you are using a non-RPM-based distribution of Linux or another platform for which there is not a pre-built VirtualGL binary package available, then log in as root, download the VirtualGL source tarball (VirtualGL-{version}.tar.gz) from the files area of the VirtualGL SourceForge web-site, uncompress it, cd vgl, and read the contents of BUILDING.txt for further instructions on how to build and install VirtualGL from source.
As root, issue the following command:
rpm -e VirtualGL
As root, issue the following command:
pkgrm SUNWvgl
Answer “yes” when prompted.
Use the Add or Remove Programs applet in the Control Panel.
TurboVNC must be installed on any machine that will act as a TurboVNC server or client. It is not necessary to install TurboVNC to use VirtualGL in Direct Mode. Also, TurboVNC need not necessarily be installed on the same server as VirtualGL.
Download the TurboJPEG RPM (turbojpeg-{version}.i386.rpm for 32-bit systems and turbojpeg-{version}.x86_64.rpm for 64-bit systems) from the files area of the VirtualGL SourceForge web-site. The 64-bit RPM provides both 32-bit and 64-bit TurboJPEG libraries. (.tgz packages are provided for users of non-RPM-based Linux distributions. You can use alien to convert these into .deb packages if you prefer.) As root, install the package:

rpm -U turbojpeg*.rpm
Download the TurboVNC RPM (turbovnc-{version}.i386.rpm) from the files area of the VirtualGL SourceForge web-site. As root, install it:

rpm -U turbovnc*.rpm
Download the TurboVNC package (SUNWtvnc-{version}.pkg.bz2 for Sparc and SUNWtvnc-{version}-x86.pkg.bz2 for x86) from the files area of the VirtualGL SourceForge web-site. As root, remove any existing TurboVNC package, then uncompress and install the new one:

pkgrm SUNWtvnc (answer “Y” when prompted)
bzip2 -d SUNWtvnc-{version}.pkg.bz2
pkgadd -d SUNWtvnc-{version}.pkg

Select the SUNWtvnc package (usually option 1) from the menu. TurboVNC for Solaris installs into /opt/SUNWtvnc.
Download the TurboVNC installer (TurboVNC-{version}.exe) from the files area of the VirtualGL SourceForge web-site and run it.
If you are using a non-RPM-based distribution of Linux or another platform for which there is not a pre-built TurboVNC binary package available, then log in as root, download the TurboVNC source tarball (turbovnc-{version}.tar.gz) from the files area of the VirtualGL SourceForge web-site, uncompress it, cd vnc/vnc_unixsrc, and read the contents of BUILDING.txt for further instructions on how to build and install TurboVNC from source.
As root, issue the following command:
rpm -e turbovnc
As root, issue the following command:
pkgrm SUNWtvnc
Answer “yes” when prompted.
Use the Add or Remove Programs applet in the Control Panel.
VirtualGL requires access to the server’s 3D graphics card so that it can create off-screen pixel buffers (Pbuffers) and redirect the 3D rendering from applications into these Pbuffers. Unfortunately, accessing a 3D graphics card on Linux currently requires going through an X server. So the only way to share the server’s 3D graphics card among multiple users is to grant those users access to the X server that is running on the 3D graphics card.
It is important to understand the security risks associated with this. Once X display access is granted to a user, there is nothing that would prevent that user from logging keystrokes or reading back images from the X display. Using xauth, one can obtain “untrusted” X authentication keys which prevent such exploits, but unfortunately, those untrusted keys also disallow access to the 3D hardware. So it is necessary to grant full trusted X access to any users that will need to run VirtualGL. Unless you fully trust the users to whom you are granting this access, you should avoid logging in locally to the server’s X display as root unless absolutely necessary.
This section will explain how to configure a VirtualGL server such that select users can run VirtualGL, even if the server is sitting at the login prompt. The basic idea is to call a script (vglgenkey) from the display manager’s startup script. vglgenkey invokes xauth to generate an authorization key for the server’s X display, and it stores this key under /etc/opt/VirtualGL. The VirtualGL launcher script (vglrun) then attempts to read this key and merge it into the user’s .Xauthority file, thus granting the user access to the server’s X display. Therefore, you can control who has access to the server’s X display simply by controlling who has read access to the /etc/opt/VirtualGL directory.

If you prefer, you can also grant access to every authenticated user on the server by replacing the references to vglgenkey below with xhost +localhost.
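A minimal sketch of this key handoff (illustrative only, not the actual vglgenkey source; xauth generate assumes the X server supports the SECURITY extension):

# Run as root from the display manager's startup script (what vglgenkey does, conceptually):
xauth -f /etc/opt/VirtualGL/vgl_xauth_key generate :0 . trusted
chmod 640 /etc/opt/VirtualGL/vgl_xauth_key

# Run later by vglrun on behalf of a user in the vglusers group:
xauth merge /etc/opt/VirtualGL/vgl_xauth_key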
1. Shut down the display manager by issuing init 3 as root.
2. Create a Unix group called vglusers and add any users that need to run VirtualGL to this group.
3. Create a directory called /etc/opt/VirtualGL and make it readable by the vglusers group. For example:

mkdir -p /etc/opt/VirtualGL
chgrp vglusers /etc/opt/VirtualGL
chmod 750 /etc/opt/VirtualGL

4. If necessary, change the default runlevel in /etc/inittab from id:3:initdefault: to id:5:initdefault: so that the display manager starts automatically on boot.
5. Add vglgenkey at the top of the display manager’s startup script. The location of this script varies depending on the particular Linux distribution and display manager being used. The following table lists some common locations for this file:
| | xdm or kdm | gdm (default display manager on most Linux systems) |
|---|---|---|
| RedHat 7/8/9, Enterprise Linux 2.1/3 | /etc/X11/xdm/Xsetup_0 (replace “0” with the display number of the X server you are configuring) | /etc/X11/gdm/Init/Default (usually this is just symlinked to /etc/X11/xdm/Xsetup_0) |
| Enterprise Linux 4, Fedora 1-4 | /etc/X11/xdm/Xsetup_0 (replace “0” with the display number of the X server you are configuring) | /etc/X11/gdm/Init/:0 (usually this is just symlinked to /etc/X11/xdm/Xsetup_0) |
| Enterprise Linux 5, Fedora 5 & 6 | /etc/X11/xdm/Xsetup_0 (replace “0” with the display number of the X server you are configuring) | /etc/gdm/Init/Default |
| SuSE/United Linux | /etc/X11/xdm/Xsetup | /etc/opt/gnome/gdm/Init/Default |
If you are using gdm, edit the gdm.conf file and add the following line under the [security] section (or change it if it already exists):

DisallowTCP=false

See the table below for the location of gdm.conf on various systems.
Optionally, you can disable the XTEST extension for added security by adding an argument of -tst to the command line used to launch the X server. The location of this command line varies depending on the particular Linux distribution and display manager being used. The following table lists some common locations:

| | xdm | gdm (default on most Linux systems) | kdm |
|---|---|---|---|
| RedHat 7/8/9, Enterprise Linux 2.1/3/4, Fedora 1-4 | /etc/X11/xdm/Xservers | /etc/X11/gdm/gdm.conf | /etc/X11/xdm/Xservers |
| Enterprise Linux 5, Fedora 5 & 6 | /etc/X11/xdm/Xservers | /etc/gdm/custom.conf | /etc/X11/xdm/Xservers |
| SuSE/United Linux | /etc/X11/xdm/Xservers | /etc/opt/gnome/gdm/gdm.conf | /etc/opt/kde3/share/config/kdm/Xservers |
For xdm/kdm-style Xservers files, add -tst to the line corresponding to the display number you are configuring. For example:

:0 local /usr/X11R6/bin/X :0 vt07 -tst

For gdm-style configuration files, add -tst to all lines that appear to be X server command lines. For example:

StandardXServer=/usr/X11R6/bin/X -tst
[server-Standard]
command=/usr/X11R6/bin/X -tst -audit 0
[server-Terminal]
command=/usr/X11R6/bin/X -tst -audit 0 -terminate
[server-Chooser]
command=/usr/X11R6/bin/X -tst -audit 0
Restart the display manager by issuing init 5 as root.

To verify the configuration, log in as a user in the vglusers group and issue the following commands:

xauth merge /etc/opt/VirtualGL/vgl_xauth_key
xdpyinfo -display :0

xdpyinfo should print information about the server’s X display. In particular, make sure that XTEST doesn’t show up in the list of extensions if you disabled it above.
If you are installing VirtualGL on a server which is running version 1.0-71xx or earlier of the nVidia accelerated GLX drivers, follow the instructions in /usr/share/doc/NVIDIA_GLX-1.0/README regarding setting the appropriate permissions for /dev/nvidia*. This is not necessary with more recent versions of the driver. Run cat /proc/driver/nvidia/version to determine which version of the nVidia driver is installed on your system.
Sun’s OpenGL library for Sparc systems has a special extension called “GLP” which allows VirtualGL to directly access a 3D graphics card even if there is no X server running on the card. Apart from greatly simplifying the process of configuring the VirtualGL server, GLP also greatly improves the overall security of the VirtualGL server, since it eliminates the need to grant X server access to VirtualGL users. In addition, GLP makes it easy to assign VirtualGL jobs to any graphics card in a multi-card system.
If your system is running Sun OpenGL 1.5 for Sparc/Solaris, it is recommended that you configure it to use GLP:
1. Create a Unix group called vglusers and add any users that need to run VirtualGL to this group.
2. If the /etc/dt/config directory does not exist, create it:

mkdir -p /etc/dt/config

Make sure that /etc/dt/config has global read/execute permissions.
3. Create a file called GraphicsDevices under /etc/dt/config and add any framebuffer device paths in your system (/dev/fbs/kfb0, /dev/fbs/jfb0, etc.) to this file, one device per line. For example:

touch /etc/dt/config/GraphicsDevices
for i in /dev/fbs/*[0-9]; do echo $i >>/etc/dt/config/GraphicsDevices; done

You can choose to include only certain framebuffer devices in this file. Only the devices listed in GraphicsDevices will be available for use by VirtualGL.
4. Make GraphicsDevices readable only by the vglusers group. For example:

chgrp vglusers /etc/dt/config/GraphicsDevices
chmod 640 /etc/dt/config/GraphicsDevices
If you wish to make GLP the default for all users of the system, you can add

VGL_DISPLAY=glp
export VGL_DISPLAY

to /etc/profile. This will cause VirtualGL to use the first device specified in /etc/dt/config/GraphicsDevices as the default rendering device. Users can override this default by setting VGL_DISPLAY in one of their startup scripts (such as ~/.profile or ~/.login) or by passing an argument of -d <device> to vglrun when invoking VirtualGL. See Chapter 19 for more details.
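For instance, a user could direct their own jobs to a specific framebuffer card (a sketch; /dev/fbs/jfb0 is an example device, which must be listed in GraphicsDevices):

# In ~/.profile (Bourne/Korn shell):
VGL_DISPLAY=/dev/fbs/jfb0
export VGL_DISPLAY

# Or, equivalently, per invocation:
vglrun -d /dev/fbs/jfb0 {application}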
If you plan to use VirtualGL only with GLP, then you can skip this section.
VirtualGL requires access to the server’s 3D graphics card so that it can create off-screen pixel buffers (Pbuffers) and redirect the 3D rendering from applications into these Pbuffers. Unfortunately, accessing a 3D graphics card on Solaris/x86 systems or on Solaris/Sparc systems without GLP requires going through an X server. On such systems, the only way to share the server’s 3D graphics card among multiple users is to grant those users access to the X server that is running on the 3D graphics card.
It is important to understand the security risks associated with this. Once X display access is granted to a user, there is nothing that would prevent that user from logging keystrokes or reading back images from the X display. Using xauth, one can obtain “untrusted” X authentication keys which prevent such exploits, but unfortunately, those untrusted keys also disallow access to the 3D hardware. So it is necessary to grant full trusted X access to any users that will need to run VirtualGL. Unless you fully trust the users to whom you are granting this access, you should avoid logging in locally to the server’s X display as root unless absolutely necessary.
This section will explain how to configure a VirtualGL server such that select users can run VirtualGL, even if the server is sitting at the login prompt. The basic idea is to call a script (vglgenkey) from the display manager’s startup script. vglgenkey invokes xauth to generate an authorization key for the server’s X display, and it stores this key under /etc/opt/VirtualGL. The VirtualGL launcher script (vglrun) then attempts to read this key and merge it into the user’s .Xauthority file, thus granting the user access to the server’s X display. Therefore, you can control who has access to the server’s X display simply by controlling who has read access to the /etc/opt/VirtualGL directory.

If you prefer, you can also grant access to every authenticated user on the server by replacing the references to vglgenkey below with xhost +localhost.
1. Create a Unix group called vglusers and add any users that need to run VirtualGL to this group.
2. Create a directory called /etc/opt/VirtualGL and make it readable by the vglusers group. For example:

mkdir -p /etc/opt/VirtualGL
chgrp vglusers /etc/opt/VirtualGL
chmod 750 /etc/opt/VirtualGL
3. If the /etc/dt/config directory does not exist, create it:

mkdir -p /etc/dt/config

4. If /etc/dt/config/Xsetup does not exist, then copy the default Xsetup file from /usr/dt/config to that location:

cp /usr/dt/config/Xsetup /etc/dt/config/Xsetup

5. Edit /etc/dt/config/Xsetup and add the following line to the bottom of the file:

/opt/SUNWvgl/bin/vglgenkey
6. If /etc/dt/config/Xconfig does not exist, then copy the default Xconfig file from /usr/dt/config to that location:

cp /usr/dt/config/Xconfig /etc/dt/config/Xconfig

7. Edit /etc/dt/config/Xconfig and add (or uncomment) the following line:

Dtlogin*grabServer: False
The Dtlogin*grabServer option restricts X display access to only the dtlogin process. This is an added security measure, since it prevents a user from attaching any kind of sniffer program to the X display even if they have display access. But Dtlogin*grabServer also prevents VirtualGL from using the X display to access the 3D graphics hardware, so this option must be disabled for VirtualGL to work properly.
If the system you are configuring as a VirtualGL server is also being used as a Sun Ray server, then make these same modifications to /etc/dt/config/Xconfig.SUNWut.prototype. Otherwise, the modifications you just made to /etc/dt/config/Xconfig will be overwritten the next time the system is restarted.
8. If /etc/dt/config/Xservers does not exist, then copy the default Xservers file from /usr/dt/config to that location:

cp /usr/dt/config/Xservers /etc/dt/config/Xservers

9. Optionally, to disable the XTEST extension, edit /etc/dt/config/Xservers and add an argument of -tst to the line corresponding to the display number you are configuring. For example:

:0 Local local_uid@console root /usr/openwin/bin/Xsun :0 -nobanner -tst
If the system you are configuring as a VirtualGL server is also being used as a Sun Ray server, then make these same modifications to /etc/dt/config/Xservers.SUNWut.prototype. Otherwise, the modifications you just made to /etc/dt/config/Xservers will be overwritten the next time the system is restarted.
10. Verify that /etc/dt/config and /etc/dt/config/Xsetup can be executed by all users, and verify that /etc/dt/config/Xconfig and /etc/dt/config/Xservers can be read by all users.
11. Restart dtlogin:

/etc/init.d/dtlogin stop; /etc/init.d/dtlogin start

12. To verify the configuration, log in as a user in the vglusers group and issue the following commands:

/usr/openwin/bin/xauth merge /etc/opt/VirtualGL/vgl_xauth_key
/usr/openwin/bin/xdpyinfo -display :0

xdpyinfo should print information about the server’s X display. In particular, make sure that XTEST doesn’t show up in the list of extensions if you disabled it above.
1. Create a Unix group called vglusers and add any users that need to run VirtualGL to this group.
2. Create a directory called /etc/opt/VirtualGL and make it readable by the vglusers group. For example:

mkdir -p /etc/opt/VirtualGL
chgrp vglusers /etc/opt/VirtualGL
chmod 750 /etc/opt/VirtualGL
3. Add /opt/SUNWvgl/bin/vglgenkey to the top of the /etc/X11/gdm/Init/Default file.
4. Edit /etc/X11/gdm/gdm.conf and add the following line under the [security] section (or change it if it already exists):

DisallowTCP=false
5. Optionally, to disable the XTEST extension, edit /etc/X11/gdm/gdm.conf and add -tst to all lines that appear to be X server command lines. For example:

StandardXServer=/usr/X11R6/bin/Xorg -tst
[server-Standard]
command=/usr/X11R6/bin/Xorg -tst -audit 0
[server-Terminal]
command=/usr/X11R6/bin/Xorg -tst -audit 0 -terminate
[server-Chooser]
command=/usr/X11R6/bin/Xorg -tst -audit 0
6. Restart the display manager:

svcadm disable gdm2-login; svcadm enable gdm2-login

7. To verify the configuration, log in as a user in the vglusers group and issue the following commands:

/usr/openwin/bin/xauth merge /etc/opt/VirtualGL/vgl_xauth_key
/usr/openwin/bin/xdpyinfo -display :0

xdpyinfo should print information about the server’s X display. In particular, make sure that XTEST doesn’t show up in the list of extensions if you disabled it above.
Whether the server’s 3D graphics card is being accessed through GLP or through an X server, you must perform the following procedure to enable VirtualGL users to access the framebuffer device(s):
1. Edit /etc/logindevperm and comment out the “frame buffers” line. For example:

# /dev/console 0600 /dev/fbs/* # frame buffers

2. Change the ownership and permissions of /dev/fbs/* to allow write access to anyone who will need to use VirtualGL. For example:

chmod 660 /dev/fbs/*
chown root /dev/fbs/*
chgrp vglusers /dev/fbs/*
Explanation: Normally, when someone logs into a Solaris machine, the system will automatically assign ownership of the framebuffer devices to that user and set the permissions for the framebuffer devices to those specified in /etc/logindevperm. The default setting in /etc/logindevperm disallows anyone from using the framebuffer devices except the user that is logged in. But in order to run VirtualGL, a user needs write access to the framebuffer devices. So in order to make the framebuffer a shared resource, it is necessary to disable the login device permissions mechanism for the framebuffer devices and manually set the owner and group for these devices such that any VirtualGL users can write to them.
Note that the framebuffer device permissions control not only remote execution of OpenGL applications but also local execution of OpenGL applications. If it is necessary for users outside of the vglusers group to run OpenGL applications on the VirtualGL server, then set the permissions on /dev/fbs/* to 666 rather than 660.
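You can verify the result with ls; given the 660 setting above, the devices should show root:vglusers ownership and rw-rw---- permissions:

ls -l /dev/fbs/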
The server’s SSH daemon should have the X11Forwarding option enabled and the UseLogin option disabled. This is configured in sshd_config, the location of which varies depending on your distribution of SSH. Solaris 10 generally keeps it in /etc/ssh, whereas Blastwave keeps it in /opt/csw/etc and SunFreeware keeps it in /usr/local/etc.
If you are using Hummingbird Exceed, add the Exceed installation directory (e.g. C:\Program Files\Hummingbird\Connectivity\9.00\Exceed) to the system PATH environment variable if it isn’t already there. Additionally, ssh.exe or putty.exe should be somewhere in your PATH.
If you are using the “Classic View” mode of XConfig, open the “Protocol” applet instead.
If you are using the “Classic View” mode of XConfig, open the “Performance” applet instead.
VirtualGL has the ability to take advantage of the MIT-SHM extension in Hummingbird Exceed to accelerate image drawing on Windows. This can improve the overall performance of the VirtualGL pipeline by as much as 20% in some cases.
The bad news is that this extension has some issues in earlier versions of Exceed. If you are using Exceed 8 or 9, you will need to obtain the following patches from the Hummingbird support site:
| Product | Patches Required | How to Obtain |
|---|---|---|
| Hummingbird Exceed 8.0 | hclshm.dll v9.0.0.1 (or higher), xlib.dll v9.0.0.3 (or higher), exceed.exe v8.0.0.28 (or higher) | Download all patches from the Hummingbird support site (Hummingbird WebSupport account required). |
| Hummingbird Exceed 9.0 | hclshm.dll v9.0.0.1 (or higher), xlib.dll v9.0.0.3 (or higher), exceed.exe v9.0.0.9 (or higher) | exceed.exe can be patched by running Hummingbird Update. All other patches must be downloaded from the Hummingbird support site (Hummingbird WebSupport account required). |
No patches should be necessary for Exceed 10 and above.
Next, you need to enable the MIT-SHM extension in Exceed:
If you are using the “Classic View” mode of XConfig, open the “Protocol” applet instead.
The VirtualGL Windows Client can be installed as a Windows service (and subsequently removed) using the links provided in the “VirtualGL Client” start menu group. Once installed, the service can be started from the Services applet in the Control Panel (located under “Administrative Tools”) or by invoking
net start vglclient
from a command prompt. The service can be subsequently stopped by invoking
net stop vglclient
If you wish to install the client as a service and have it listen on a port other than the default (4242 for unencrypted connections or 4243 for SSL connections), then you will need to install the service manually from the command line. vglclient -? gives a list of the relevant command-line options.
In this mode, X11 traffic is encrypted, but the VirtualGL image stream is left unencrypted to maximize performance.
1. Start the VirtualGL client on the client machine:

Linux: vglclient
Solaris: /opt/SUNWvgl/bin/vglclient

2. On Linux or Solaris clients, open a terminal and issue:

echo $DISPLAY

and make a note of the value. On Windows clients, open a command prompt and issue:

set DISPLAY=localhost:{n}.0

Replace {n} with the display number that Exceed is occupying. To obtain this, hover over the Exceed icon in the taskbar and make a note of the value it displays (usually :0.0, unless you have multiple Exceed sessions running.)

3. Open a Secure Shell session into the VirtualGL server:

ssh -X {user}@{server}

Replace {user} with your user account name on the VirtualGL server and {server} with the hostname or IP address of that server. If using PuTTY, replace ssh with putty in the above example.

4. In the Secure Shell session, set the VGL_CLIENT environment variable on the VirtualGL server to point to the client’s X display:

export VGL_CLIENT={client}:{n}.0

or

setenv VGL_CLIENT {client}:{n}.0

Replace {client} with the hostname or IP address of your client machine (echo $SSH_CLIENT if you don’t know this) and {n} with the display number of the client machine’s X display (obtained in Step 2.)

5. In the Secure Shell session, launch the application:

Linux: vglrun [vglrun options] {application_executable_or_script} {arguments}
Solaris: /opt/SUNWvgl/bin/vglrun [vglrun options] {application_executable_or_script} {arguments}

See Chapter 19 for a description of vglrun command line options.
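Putting it together, a complete Direct Mode session might look like this sketch (the hostnames, IP address, display number, and glxgears as a test application are all examples):

# On the client machine (Linux), with the X display on :0:
vglclient &
ssh -X my_user@my_server

# Then, in the SSH session on the VirtualGL server:
export VGL_CLIENT=192.168.0.10:0.0
vglrun glxgears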
In this mode, both the X11 traffic and the VirtualGL image stream travel over direct, unencrypted connections to maximize performance on trusted networks.

1. Start the VirtualGL client on the client machine:

Linux: vglclient
Solaris: /opt/SUNWvgl/bin/vglclient

2. On Linux or Solaris clients, open a terminal and issue:

echo $DISPLAY

and make a note of the value. On Windows clients, hover over the Exceed icon in the taskbar and make a note of the display number it shows (usually :0.0, unless you have multiple Exceed sessions running.)

3. Grant the VirtualGL server access to the client’s X display:

xhost +{server}

Replace {server} with the hostname or IP address of the VirtualGL server. If using Exceed, the same thing can be accomplished by adding the server’s name to Exceed’s host access list (open xhost.txt in Notepad.)

4. Open a Secure Shell session into the VirtualGL server:

ssh {user}@{server}

Replace {user} with your user account name on the VirtualGL server and {server} with the hostname or IP address of that server. If using PuTTY, replace ssh with putty in the above example.

5. In the Secure Shell session, set the DISPLAY environment variable on the VirtualGL server to point to the client’s X display:

export DISPLAY={client}:{n}.0

or

setenv DISPLAY {client}:{n}.0

Replace {client} with the hostname or IP address of your client machine (echo $SSH_CLIENT if you don’t know this) and {n} with the display number of the client machine’s X display (obtained in Step 2.)

6. In the Secure Shell session, launch the application:

Linux: vglrun [vglrun options] {application_executable_or_script} {arguments}
Solaris: /opt/SUNWvgl/bin/vglrun [vglrun options] {application_executable_or_script} {arguments}

See Chapter 19 for a description of vglrun command line options.
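A complete unencrypted session, by contrast, might look like this sketch (the hostnames, display number, and glxgears are examples):

# On the client machine, with the X display on :0:
vglclient &
xhost +my_server
ssh my_user@my_server

# Then, in the SSH session on the VirtualGL server:
export DISPLAY=my_client:0.0
vglrun glxgears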
On high-speed networks such as Ethernet, enabling SSL encryption reduces VirtualGL’s performance by as much as 20%. To enable it, pass an argument of +s to vglrun when launching VirtualGL, or set the environment variable VGL_SSL to 1 on the VirtualGL server (see Chapter 19 for more details.)
The procedure is the same as for the X11 Forwarding case, except that the following additional steps need to be taken:

1. Find a free port on the VirtualGL server:

Linux: /opt/VirtualGL/bin/nettest -findport
Solaris: /opt/SUNWvgl/bin/nettest -findport

2. When opening the Secure Shell session into the VirtualGL server, add a reverse tunnel:

ssh -X -R {port}:localhost:4242 {user}@{server}

Replace {port} with the port number you obtained in Step 1.

If you are using an OpenSSH client, you can also type the following key sequence: <ENTER> ~ C (that’s the Enter key, followed by a tilde, followed by a capital C), which will bring up an ssh> prompt at which you can enter -R {port}:localhost:4242. This allows you to set up the tunnel without closing and re-opening the SSH session.

3. On the VirtualGL server, set the VGL_PORT environment variable to match the port number you obtained above.

4. Set the VGL_CLIENT environment variable on the VirtualGL server to localhost:{n}.0, where {n} is the display number of the X server running on the client machine.

Explanation: When you established the SSH connection using the -R argument, it created a listener on the VirtualGL server. That listener will accept a connection from VirtualGL and forward the connection over the SSH tunnel to port 4242 on the client machine. Thus, you need to set VGL_PORT and VGL_CLIENT on the VirtualGL server to tell VirtualGL to make a connection to the SSH listener rather than directly to the “real” VirtualGL client program.
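For example, if nettest reports that port 4210 is free (the port number, hostnames, and glxgears are illustrative):

# On the client machine:
vglclient &
ssh -X -R 4210:localhost:4242 my_user@my_server

# Then, in the SSH session on the VirtualGL server:
export VGL_PORT=4210
export VGL_CLIENT=localhost:0.0
vglrun glxgears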
Referring to Chapter 2, Raw Mode is a mode in which VirtualGL bypasses its internal image compressor and instead sends the rendered 3D images to an X server as uncompressed bitmaps. Raw Mode is designed to be used with an “X Proxy”, which is a virtual X server that intercepts X-Windows commands from an application, renders them into images, compresses the images, and sends them over the network to a client.
Thus, in Raw Mode, VirtualGL relies on the X proxy to compress the rendered 3D images, and since VirtualGL is sending those images to the X proxy at a very fast rate, the proxy must be able to compress the images very quickly in order to keep up. But, unfortunately, most X proxies can’t. They simply aren’t designed for the types of full-screen video workloads that VirtualGL generates. Therefore, the VirtualGL Project provides an optimized X proxy known as TurboVNC, which is based on the Virtual Network Computing (VNC) standard (more specifically, on the TightVNC variant thereof.)
On the surface, TurboVNC behaves very similarly to its parent project, but TurboVNC has been tuned to provide interactive performance for the types of full-screen video workloads that VirtualGL produces. On these types of image workloads, TurboVNC performs as much as an order of magnitude faster than TightVNC, uses more than an order of magnitude less CPU time to compress each frame, and it produces comparable compression ratios. Part of this speedup comes from the use of TurboJPEG, the same high-speed vector-optimized JPEG codec used by VirtualGL. Another large part of the speedup comes from bypassing the color compression features of TightVNC. TightVNC performs very CPU-intensive analysis on each image tile to determine whether the tile will compress better using color compression or JPEG. But for the types of images that a 3D application generates, it is almost never the case that color compression compresses better than JPEG, so TurboVNC bypasses this analysis to improve performance. TurboVNC also has the ability to hide network latency by decompressing and drawing a frame on the client while the next frame is being fetched from the server, thus improving performance dramatically on high-latency connections. TurboVNC additionally provides client-side double buffering, full support for Solaris, and other tweaks.
There are several reasons why one might prefer to use Raw Mode + TurboVNC over Direct Mode (and several reasons why one might not.)
1. Open a Secure Shell session into the TurboVNC server:

ssh {user}@{server}

Replace {user} with your user account name on the TurboVNC server and {server} with the hostname or IP address of that server. If using PuTTY, replace ssh with putty in the above example.

2. In the Secure Shell session, start a TurboVNC server session:

Linux: /opt/TurboVNC/bin/vncserver
Solaris: /opt/SUNWtvnc/bin/vncserver

3. Make a note of the X display that the TurboVNC server session is occupying, e.g.:

New 'X' desktop is my_server:1

4. On the client machine, start the TurboVNC viewer and connect to the display obtained in Step 3 (my_server:1 in the above example):

Linux: /opt/TurboVNC/bin/vncviewer
Solaris: /opt/SUNWtvnc/bin/vncviewer
[Screenshots: Windows TurboVNC viewer and Linux/Solaris TurboVNC viewer]
5. From within the TurboVNC session, launch the application:

Linux: vglrun [vglrun options] {application_executable_or_script} {arguments}
Solaris: /opt/SUNWvgl/bin/vglrun [vglrun options] {application_executable_or_script} {arguments}

See Chapter 19 for a description of vglrun command line options.
If TurboVNC and VirtualGL are running on different servers, then it is desirable to use Raw Mode to send images from the VirtualGL server to the TurboVNC server. Otherwise, the images would have to be compressed by the VirtualGL server, decompressed by the VirtualGL client, then recompressed by the TurboVNC server, which is a waste of CPU resources. However, sending images uncompressed over a network requires a fast network (generally, Gigabit Ethernet or faster.) So there needs to be a fast link between the VirtualGL server and the TurboVNC server for this procedure to perform well.
The procedure for using Raw Mode to transmit images from a VirtualGL server to a TurboVNC server is essentially the same as the procedure for using Direct Mode with a Direct X11 Connection – with the following notable differences:
- Set VGL_COMPRESS to 0 or pass an argument of -c 0 to vglrun when launching VirtualGL (see the example below.) Otherwise, VirtualGL will detect that the connection to the X server is remote, and it will automatically try to enable Direct Mode. Setting VGL_COMPRESS to 0 forces the use of Raw Mode, regardless of whether the X server is local or remote.
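For example (glxgears is just a convenient test application):

export VGL_COMPRESS=0
vglrun glxgears

# or, equivalently:
vglrun -c 0 glxgears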
Closing the TurboVNC viewer disconnects from the TurboVNC server session, but the TurboVNC server session (and any applications that you may have started in it) is still running on the server machine, and you can reconnect to it at any time.
To kill a TurboVNC server session:
Linux: /opt/TurboVNC/bin/vncserver -kill :{n}
Solaris: /opt/SUNWtvnc/bin/vncserver -kill :{n}

Replace {n} with the X display number of the TurboVNC server session you wish to kill.
To list the X display numbers and process IDs of all TurboVNC server sessions that are currently running under your user account on this machine, run:

Linux: /opt/TurboVNC/bin/vncserver -list
Solaris: /opt/SUNWtvnc/bin/vncserver -list
When a TurboVNC server session is created, it automatically launches a miniature web server that serves up a Java TurboVNC viewer applet. This Java TurboVNC viewer can be used to connect to the TurboVNC server from a machine that does not have a native TurboVNC viewer installed (or a machine for which no native TurboVNC viewer is available.) The Java viewer is significantly slower than the native viewer on high-speed networks, but on low-speed networks the Java viewer and native viewers have comparable performance. The Java viewer does not currently support double buffering.
To use the Java TurboVNC viewer, point your web browser to:

http://{turbovnc_server}:{5800+n}

where {turbovnc_server} is the hostname or IP address of the TurboVNC server machine, and {n} is the X display number of the TurboVNC server session to which you want to connect.

Example: If the TurboVNC server is running on X display my_server:1, then point your web browser to:

http://my_server:5801
To get the peak performance out of TurboVNC, you must give it a hint about the type of network that separates your client machine from the TurboVNC server. To do this, select a Connection Profile when launching the TurboVNC viewer.
In the Windows TurboVNC viewer, there are three buttons in the TurboVNC Connection dialog box that allow you to easily select the connection profile. In the Java viewer, the same thing is accomplished by clicking the “Options” button at the top of the browser window. With the Linux/Solaris TurboVNC viewer, you can either use command line options to set the connection profile prior to connecting, or you can press the F8 key after connecting to pop up a menu from which you can select the connection profile.
| | Linux/Solaris TurboVNC viewer | Windows & Java TurboVNC viewers |
|---|---|---|
| High-bandwidth, low-latency network | No action necessary | Select the “High-Speed Network” connection profile. |
| Low-bandwidth, high-latency network (favor performance over image quality) | Pass an argument of -broadband to vncviewer, or select “Preset: Broadband (favor performance)” from the F8 popup menu. | Select the “Broadband (favor performance)” connection profile. |
| Low-bandwidth, high-latency network (favor image quality over performance) | Pass an argument of -wan to vncviewer, or select “Preset: Broadband (favor image quality)” from the F8 popup menu. | Select the “Broadband (favor image quality)” connection profile. |
The “High-Speed Network” and “Broadband (favor image quality)” connection profiles set the JPEG compression quality to a high enough level that the compression loss is not perceivable by the human eye. The “Broadband (favor performance)” connection profile sets the image quality to a very low (but still usable) level which will achieve interactive performance on typical broadband connections.
Normally, the connection between the TurboVNC server and the TurboVNC viewer is completely unencrypted, but securing that connection can be easily accomplished by using the port forwarding feature of Secure Shell (SSH). After you have started a TurboVNC server session on the server machine, open a new SSH connection into the server machine using the following command line:
ssh -L {5900+n}:localhost:{5900+n} {user}@{server}
If using PuTTY, replace ssh with putty in the above example.

Replace {user} with your user account name on the TurboVNC server and {server} with the hostname or IP address of that server. Replace {n} with the X display number of the TurboVNC server session to which you want to connect.

For instance, if you wish to connect to display :1 on server my_server using user account my_user, you would type:

ssh -L 5901:localhost:5901 my_user@my_server

After the SSH connection has been established, you can then launch the TurboVNC viewer and point it to localhost:{n} (localhost:1 in the above example.)
For LAN connections and other high-speed networks, tunneling the TurboVNC connection over SSH will reduce performance by as much as 20% (50% if using PuTTY.) But for wide-area networks, broadband, etc., there is no performance penalty for using SSH tunneling with TurboVNC.
For more detailed instructions on the usage of TurboVNC, refer to the man pages:

Linux: man -M /opt/TurboVNC/man {vncserver | Xvnc | vncviewer | vncconnect | vncpasswd}
Solaris: man -M /opt/SUNWtvnc/man {vncserver | Xvnc | vncviewer | vncconnect | vncpasswd}
The TightVNC documentation (http://www.tightvnc.com/docs.html) might also be helpful, since TurboVNC is based on TightVNC and shares many of its features.
The previous chapter described how to use VirtualGL in Raw Mode with TurboVNC, but much of this information is also applicable to other X proxies, such as RealVNC, NX, etc. Generally, none of these other solutions will provide anywhere near the performance of TurboVNC, but some of them have capabilities that TurboVNC lacks (NX, for instance, can do seamless windows.)
VirtualGL reads the value of the DISPLAY environment variable to determine whether to enable Raw Mode by default. If DISPLAY begins with a colon (“:”) or with “unix:”, then VirtualGL will enable Raw Mode as the default. This should effectively make Raw Mode the default for most X proxies, but if for some reason it doesn’t, then you can force the use of Raw Mode by setting VGL_COMPRESS to 0 or passing an argument of -c 0 to vglrun.
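For example (a sketch; glxgears stands in for the real application):

# Raw Mode is the default here, since DISPLAY begins with a colon:
DISPLAY=:1 vglrun glxgears

# If the X proxy's DISPLAY doesn't match that pattern, force Raw Mode:
vglrun -c 0 glxgears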
The previous chapter described how to use Raw Mode over a server network to send uncompressed pixels from a VirtualGL server to a TurboVNC server. But Raw Mode can also be used to send uncompressed pixels to a client machine. There are two main reasons why you might want to do this:
The procedure for using Raw Mode over a network is the same as the procedure for using Direct Mode with a Direct X11 Connection – with the following notable differences:
- Set VGL_COMPRESS to 0 or pass an argument of -c 0 to vglrun when launching VirtualGL. Otherwise, VirtualGL will detect that the connection to the X server is remote, and it will automatically try to enable Direct Mode. Setting VGL_COMPRESS to 0 forces the use of Raw Mode, regardless of whether the X server is local or remote.
Do not use SSH X11 tunneling with Raw Mode, as this will reduce the performance by 80% or more. It is necessary to use a direct X11 connection to sustain an interactive frame rate with Raw Mode on Gigabit networks.
vglrun and Solaris Shell Scripts

vglrun can be used to launch either binary executables or shell scripts, but there are a few things to keep in mind when using vglrun to launch a shell script on Solaris. When you vglrun a shell script, the VirtualGL faker library will be preloaded into every executable that the script launches. Normally this is innocuous, but if the script calls any executables that are setuid root, then Solaris will refuse to load those executables, because you are attempting to preload a library (VirtualGL) that is not in a “secure path.” Solaris keeps a tight lid on what goes into /usr/lib and /lib, and by default, it will only allow libraries in those paths to be preloaded into an executable that is setuid root. Generally, 3rd party packages are forbidden from installing anything into /usr/lib or /lib. But you can use the crle utility to add other directories to the operating system’s list of secure paths. In the case of VirtualGL, you would execute the following commands (as root):

crle -u -s /opt/SUNWvgl/lib
crle -64 -u -s /opt/SUNWvgl/lib/64
But please be aware of the security ramifications of this before you do it. You are essentially telling Solaris that you trust the security and stability of the VirtualGL code as much as you trust the security and stability of the operating system. And while we’re flattered, we’re not sure that we’re necessarily deserving of that accolade, so if you are in a security critical environment, apply the appropriate level of paranoia here.
An easier, and perhaps more secure, approach is to simply edit the application script so that it saves the values of the LD_PRELOAD environment variables, clears them, and then restores them right before the actual application executable is run. For instance, take the following application script (please):

Contents of application.sh:

#!/bin/sh
some_setuid_binary
some_application_binary

You would modify the script as follows:

Contents of application.sh:

#!/bin/sh
LD_PRELOAD_32_SAVE=$LD_PRELOAD_32
LD_PRELOAD_64_SAVE=$LD_PRELOAD_64
LD_PRELOAD_32=
LD_PRELOAD_64=
export LD_PRELOAD_32 LD_PRELOAD_64
some_setuid_binary
LD_PRELOAD_32=$LD_PRELOAD_32_SAVE
LD_PRELOAD_64=$LD_PRELOAD_64_SAVE
export LD_PRELOAD_32 LD_PRELOAD_64
some_application_binary
vglrun on Solaris has two options that are relevant to launching scripts:

vglrun -32 {script}

will preload VirtualGL only into 32-bit executables called by a script, whereas

vglrun -64 {script}

will preload VirtualGL only into 64-bit executables. So if, for instance, the setuid binary that the script is calling is a 32-bit executable and the application is a 64-bit executable, then you could use vglrun -64 to launch the application script.
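Continuing the earlier example, if some_setuid_binary is 32-bit and some_application_binary is 64-bit, the unmodified script could simply be launched with:

vglrun -64 application.sh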
The lion’s share of OpenGL applications are dynamically linked against libGL.so, and thus libGL.so is automatically loaded whenever the application loads. Whenever vglrun is used to launch such applications, VirtualGL is loaded ahead of libGL.so, meaning that OpenGL and GLX symbols are resolved from VirtualGL first and the “real” OpenGL library second.
However, some applications (particularly games) are not dynamically linked against libGL.so. These applications typically call dlopen() and dlsym() later on in the program’s execution to manually load OpenGL and GLX symbols from libGL.so. Such applications also generally provide a mechanism (usually either an environment variable or a command line argument) which allows the user to specify a library that can be loaded instead of libGL.so.
So let’s assume that you just downloaded the latest version of the Linux game Foo Wars from the Internet, and (for whatever reason) you want to run the game in a VNC session. The game provides a command line switch -g which can be used to specify an OpenGL library to load other than libGL.so. You would launch the game using a command line such as this:

vglrun foowars -g /usr/lib/librrfaker.so
You still need to use vglrun to launch the game, because VirtualGL must also intercept a handful of X11 calls. Using vglrun allows VGL to intercept these calls, whereas using the game’s built-in mechanism for loading a substitute OpenGL library allows VirtualGL to intercept the GLX and OpenGL calls.
In some cases, the application doesn’t provide an override mechanism such as the above. In these cases, you should pass an argument of -dl to vglrun when starting the application. For example:

vglrun -dl foowars
Passing -dl to vglrun forces another library to be loaded ahead of VirtualGL and libGL.so. This new library intercepts any calls to dlopen() and forces the application to open VirtualGL instead of libGL.so.
Chapter 15 contains specific recipes for getting a variety of games and other applications to work with VirtualGL.
Chromium is a powerful framework for performing various types of parallel OpenGL rendering. It is usually used on clusters of commodity Linux PC’s to divide up the task of rendering scenes with large geometries or large pixel counts (such as when driving a display wall.) Chromium is most often used in one of three configurations:
Sort-First Rendering (Image-Space Decomposition) is used to overcome the fill-rate limitations of individual graphics cards. When configured to use sort-first rendering, Chromium divides up the scene based on which polygons will be visible in a particular section of the final image. It then instructs each node of the cluster to render only the polygons that are necessary to generate the image section (“tile”) for that node. This is primarily used to drive high-resolution displays that would be impractical to drive from a single graphics card due to limitations in the card’s framebuffer memory, processing power, or both. Configuration 1 could be used, for instance, to drive a CAVE, video wall, or even an extremely high-resolution monitor. In this configuration, each Chromium node generally uses all of its screen real estate to render a section of the multi-screen image.
VirtualGL is generally not very useful with Configuration 1. You could theoretically install a separate copy of VirtualGL on each display node and use it to redirect the output of each crserver instance to a multi-screen X server running elsewhere on the network. But there would be no way to synchronize the screens on the remote end. Chromium uses DMX to synchronize the screens in a multi-screen configuration, and VirtualGL would have to be made DMX-aware for it to perform the same job. Maybe at some point in the future… If you have a need for such a configuration, let us know.
Configuration 2 uses the same sort-first principle as Configuration 1, except that each tile is only a fraction of a single screen, and the tiles are recombined into a single window on Node 0. This configuration is perhaps the least often used of the three, but it is useful in cases where the scene contains a large amount of textures (such as in volume rendering) and thus rendering the whole scene on a single node would be prohibitively slow due to fill-rate limitations.
In this configuration, the application is allowed to choose a visual, create an X window, and manage the window as it would normally do. But all other OpenGL and GLX activity is intercepted by the Chromium App Faker (CrAppFaker) so that the rendering task can be split up among the rendering nodes. Once each node has rendered its section of the final image, the tiles get passed back to a Chromium Server (CrServer) process running on Node 0. This CrServer process attaches to the previously-created application window and draws the pixels into it using glDrawPixels().
The general strategy for making this work with VirtualGL is to first make it work without VirtualGL and then insert VirtualGL only into the processes that run on Node 0. VirtualGL must be inserted into the CrAppFaker process to prevent CrAppFaker from sending glXChooseVisual() calls to the X server (which would fail if the X server is a VNC server or otherwise does not provide GLX.) VirtualGL must be inserted into the CrServer process on Node 0 to prevent it from sending glDrawPixels() calls to the X server (which would effectively send uncompressed images over the network.) Instead, VirtualGL forces CrServer to draw into a Pbuffer, and VGL takes charge of transmitting those pixels to the destination X server in the most efficient way possible.
Since Chromium uses dlopen() to load the system’s OpenGL library, preloading VirtualGL into the CrAppFaker and CrServer processes using vglrun is not sufficient. Fortunately, Chromium provides an environment variable, CR_SYSTEM_GL_PATH, which allows one to specify an alternate path in which it will search for the system’s libGL.so. The VirtualGL packages for Linux and Solaris include a symbolic link named libGL.so which really points to the VirtualGL faker library (librrfaker.so) instead. This symbolic link is located in its own isolated directory, so that directory can be passed to Chromium in the CR_SYSTEM_GL_PATH environment variable, thus causing Chromium to load VirtualGL rather than the “real” OpenGL library. Refer to the following table:
| | 32-bit Applications | 64-bit Applications |
|---|---|---|
| Linux | /opt/VirtualGL/lib | /opt/VirtualGL/lib64 |
| Solaris | /opt/SUNWvgl/fakelib | /opt/SUNWvgl/fakelib/64 |

CR_SYSTEM_GL_PATH setting required to use VirtualGL with Chromium

Running the CrServer in VirtualGL is simply a matter of setting this environment variable and then invoking crserver with vglrun. For example:

export CR_SYSTEM_GL_PATH=/opt/VirtualGL/lib
vglrun crserver
In the case of CrAppFaker, it is also necessary to set VGL_GLLIB to the location of the “real” OpenGL library (example: /usr/lib/libGL.so.1.) CrAppFaker creates its own fake version of libGL.so which is really just a copy of Chromium’s libcrfaker.so. So VirtualGL, if left to its own devices, will unwittingly try to load libcrfaker.so instead of the “real” OpenGL library. Chromium’s libcrfaker.so will in turn try to load VirtualGL again, and an endless loop will occur.
So what we want to do is something like this:

export CR_SYSTEM_GL_PATH=/opt/VirtualGL/lib
export VGL_GLLIB=/usr/lib/libGL.so.1
crappfaker
CrAppFaker will copy the application to a temp directory and then copy libcrfaker.so to that same directory, renaming it as libGL.so. So when the application is started, it loads libcrfaker.so instead of libGL.so. libcrfaker.so will then load VirtualGL instead of the “real” libGL, because we’ve overridden CR_SYSTEM_GL_PATH to make Chromium find VirtualGL’s fake libGL.so first. VirtualGL will then use the library specified in VGL_GLLIB to make any “real” OpenGL calls that it needs to make.
Note that crappfaker should not be invoked with vglrun.
So, putting this all together, here is an example of how you might start a sort-first rendering job using Chromium and VirtualGL:

1. Start crserver on each of the rendering nodes.
2. On Node 0, set CR_SYSTEM_GL_PATH to the appropriate value for the operating system and application type (see table above).
3. On Node 0, run vglrun crserver &.
4. On Node 0, set VGL_GLLIB to the location of the “real” libGL (example: /usr/lib/libGL.so.1 or /usr/lib64/libGL.so.1).
5. On Node 0, run crappfaker (do not use vglrun here).
Again, it’s always a good idea to make sure this works without VirtualGL before adding VirtualGL into the mix.
When using VirtualGL with this mode, resizing the application window may not work properly. This is because the resize event is sent to the application process, and therefore the CrServer process that’s actually drawing the pixels has no way of knowing that a window resize has occurred. A possible fix is to modify Chromium such that it propagates the resize event down the render chain so that all of the CrServer processes are aware that a resize event occurred.
Sort-Last Rendering is used when the scene contains a huge number of polygons and the rendering bottleneck is processing all of that geometry on a single graphics card. In this case, each node runs a separate copy of the application, and for best results, the application needs to be at least partly aware that it’s running in a parallel environment so that it can give Chromium hints as to how to distribute the various objects to be rendered. Each node generates an image of a particular portion of the object space, and these images must be composited in such a way that the front-to-back ordering of pixels is maintained. This is generally done by collecting Z buffer data from each node to determine whether a particular pixel on a particular node is visible in the final image. The rendered images from each node are often composited using a “binary swap”, whereby the nodes combine their images in a cascading tree so that the overall compositing time is proportional to log2(N) rather than N.
To make this configuration work with VirtualGL:

1. Start crappfaker on each of the rendering nodes.
2. On Node 0, set CR_SYSTEM_GL_PATH to the appropriate value for the operating system and application type (see table in Section 14.2).
3. On Node 0, run vglrun crserver.
The Chromium Utility Toolkit (CRUT) provides a convenient way for graphics applications to specifically take advantage of Chromium’s sort-last rendering capabilities. Such applications can use CRUT to explicitly specify how their object space should be decomposed. CRUT applications require an additional piece of software, crutserver, to be running on Node 0. So to make such applications work with VirtualGL (a consolidated sketch follows the list):

1. Start crappfaker on each of the rendering nodes.
2. On Node 0, set CR_SYSTEM_GL_PATH to the appropriate value for the operating system and application type (see table in Section 14.2).
3. On Node 0, run vglrun crutserver &.
4. On Node 0, run vglrun crserver.
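The CRUT case might look like this sketch (the CR_SYSTEM_GL_PATH value assumes a 32-bit application on Linux, per the table in Section 14.2):

# On each rendering node:
crappfaker

# On Node 0:
export CR_SYSTEM_GL_PATH=/opt/VirtualGL/lib
vglrun crutserver &
vglrun crserver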
Chromium’s use of X11 is generally not very optimal. It assumes a very fast connection between the X server and the Chromium Server. In certain modes, Chromium polls the X server on every frame to determine whether windows have been resized, etc. Thus, we have observed that, even on a fast network, Chromium tends to perform much better with VirtualGL running in a TurboVNC session as opposed to VirtualGL running in Direct Mode.
ModViz Virtual Graphics PlatformTM is a polished commercial clustered rendering framework for Linux which supports all three of the rendering modes described above and provides a much more straightforward interface to configure and run these types of parallel rendering jobs.
All VGP jobs, regardless of configuration, are spawned through vglauncher, a front-end program which automatically takes care of starting the appropriate processes on the rendering nodes, intercepting OpenGL calls from the application instance(s), sending rendered images back to Node 0, and compositing the images as appropriate. In a similar manner to VirtualGL’s vglrun, VGP’s vglauncher preloads a library (libVGP.so) in place of libGL.so, and this library intercepts the OpenGL calls from the application.
So our strategy here is similar to our strategy for loading the Chromium App Faker. We want to insert VirtualGL between VGP and the real system OpenGL library, so that VGP will call VirtualGL and VirtualGL will call libGL.so. Achieving this with VGP is relatively simple:

export VGP_BACKING_GL_LIB=librrfaker.so
vglrun vglauncher --preload=librrfaker.so:/usr/lib/libGL.so {application}

Replace /usr/lib/libGL.so with the full path of your system’s OpenGL library (/usr/lib64/libGL.so if you are launching a 64-bit application.)
Application | Platform | Recipe | Notes |
---|---|---|---|
ANSA v12.1.0 | Linux/x86 | Add LD_PRELOAD_SAVE=$LD_PRELOAD and export LD_PRELOAD= (each on its own line) to the top of the ansa.sh script, then add export LD_PRELOAD=$LD_PRELOAD_SAVE just prior to the ${ANSA_EXEC_DIR}bin/ansa_linux${ext2} line. | The ANSA startup script directly invokes /lib/libc.so.6 to query the glibc version. Since the VirtualGL faker depends on libc, preloading VirtualGL when directly invoking libc.so.6 creates an infinite loop. So it is necessary to disable the preloading of VirtualGL in the application script and then re-enable it prior to launching the actual application. |
Army Ops | Linux/x86 | vglrun -dl armyops | See Chapter 13 for more details |
Descent 3 | Linux/x86 | vglrun descent3 -g /usr/lib/librrfaker.so or vglrun -dl descent3 | See Chapter 13 for more details |
Doom 3 | Linux/x86 | vglrun doom3 +set r_glDriver /usr/lib/librrfaker.so or vglrun -dl doom3 | See Chapter 13 for more details |
Enemy Territory (Return to Castle Wolfenstein) | Linux/x86 | vglrun et +set r_glDriver /usr/lib/librrfaker.so or vglrun -dl et | See Chapter 13 for more details |
Heretic II | Linux/x86 | vglrun heretic2 +set gl_driver /usr/lib/librrfaker.so +set vid_ref glx or vglrun -dl heretic2 +set vid_ref glx | See Chapter 13 for more details |
Heavy Gear II | Linux/x86 | vglrun hg2 -o /usr/lib/librrfaker.so or vglrun -dl hg2 | See Chapter 13 for more details |
I-deas Master Series 9, 10, & 11 | Solaris/Sparc | When running I-deas with VirtualGL on a Solaris/Sparc server, remotely displaying to a non-Sparc client machine or to an X proxy such as VNC, it may be necessary to set the SDRC_SUN_IGNORE_GAMMA environment variable to 1. | I-deas normally aborts if it detects that the X visual assigned to it is not gamma-corrected. But gamma-corrected X visuals only exist on Solaris/Sparc X servers, so if you are displaying the application to another type of X server or X proxy which doesn’t provide gamma-corrected X visuals, then it is necessary to override the gamma detection mechanism in I-deas. |
Java2D applications that use OpenGL | Linux, Solaris | Java2D will use OpenGL to perform its rendering if sun.java2d.opengl is set to True. For example: java -Dsun.java2d.opengl=True MyAppClass In order for this to work in VirtualGL, it is necessary to invoke vglrun with the -dl switch. For example: vglrun -dl java -Dsun.java2d.opengl=True MyAppClass If you are using Java v6 b92 or later, you can also set the environment variable J2D_ALT_LIBGL_PATH to the path of librrfaker.so. For example: setenv J2D_ALT_LIBGL_PATH /opt/SUNWvgl/lib/librrfaker.so vglrun java -Dsun.java2d.opengl=True MyAppClass | See Chapter 13 for more details |
Java2D applications that use OpenGL | Solaris/Sparc | When VirtualGL is used in conjunction with Java v5.0 (also known as Java 1.5.0) to remotely display Java2D applications using the OpenGL pipeline (see above), certain Java2D applications will cause the OpenGL subsystem to crash with the following error: thread tries to access GL context current to another thread If you encounter this error, try setting the SUN_OGL_IS_MT environment variable to 1 and re-running the application. | Java 5.0 should call glXInitThreadsSUN() since it is using multiple OpenGL threads, but it doesn’t. Purely by chance, this doesn’t cause any problems when the application is displayed locally. But VirtualGL changes the threading behavior enough that the luck runs out. This issue does not exist in Java 6. |
Pro/ENGINEER Wildfire v2.0 | Solaris/Sparc | Add graphics opengl to ~/config.pro. You may also need to set the VGL_XVENDOR environment variable to "Sun Microsystems, Inc." if you are running Pro/ENGINEER 2.0 over a remote X connection to a Linux or Windows VirtualGL client. | Pro/E 2.0 for Solaris will disable OpenGL if it detects a remote connection to a non-Sun X server. |
Pro/ENGINEER Wildfire v3.0 | Solaris/Sparc | When using Direct Mode, set the environment variable VGL_INTERFRAME to 0 on the VirtualGL server prior to launching Pro/E v3. | Pro/E v3 frequently renders to the front buffer and, for unknown reasons, sends long sequences of glFlush() calls (particularly in wireframe mode) even if nothing new has been rendered. This causes VGL to send long sequences of duplicate images into the Direct Mode image pipeline. If interframe comparison is enabled, the overhead of comparing these duplicate images can lead to slow application performance when zooming in or out in Pro/E. It’s faster to disable interframe comparison in this case and simply let VGL’s frame spoiling system discard any frames that it can’t send in real time. This results in only a few of the duplicate frames being sent to the client, with no CPU time wasted on comparing the hundreds of other duplicate frames that won’t be sent. |
QGL (OpenGL Qt Widget) | Linux | vglrun -dl {application} | Qt can be built such that it either resolves symbols from libGL automatically or uses dlopen() to manually resolve those symbols from libGL. As of Qt v3.3, the latter behavior is the default, so OpenGL programs built with later versions of libQt will not work with VirtualGL unless the -dl switch is used with vglrun. See Chapter 13 for more details |
Quake 3 | Linux/x86 | vglrun quake3 +set r_glDriver /usr/lib/librrfaker.so or vglrun -dl quake3 | See Chapter 13 for more details |
Soldier of Fortune | Linux/x86 | vglrun sof +set gl_driver /usr/lib/librrfaker.so or vglrun -dl sof | See Chapter 13 for more details |
Unreal Tournament 2004 | Linux/x86 | vglrun -dl ut2004 | See Chapter 13 for more details |
VisConcept | Solaris/Sparc | Set the environment variable VGL_GUI_XTTHREADINIT to 0. | Popping up the VirtualGL configuration dialog may cause the application to hang unless you set this environment variable. See Section 19.1 for more details. |
The general idea behind VirtualGL is to offload the 3D rendering work to the server so that the client only has to draw 2D images. Normally, the VirtualGL and TurboVNC clients use 2D image drawing commands to display the rendered 3D images from the VirtualGL server, thus eliminating the need for a 3D graphics card on the client machine. But drawing stereo images requires a 3D graphics card, so such a card must be present in any client machine that will use VirtualGL with stereographic rendering. Since the 3D graphics card is only being used to draw images, it need not necessarily be a high-end card. Generally, the least expensive 3D graphics card that has stereo capabilities will work fine in a VirtualGL client.
The server must also have a 3D graphics card that supports stereo, since this is the only way that VirtualGL can obtain a stereo Pbuffer. When an application requests a stereo visual, VirtualGL will return a stereo visual to the application only if stereo visuals are available on both the server (in the form of stereo Pbuffers) and the client’s X display (see “Troubleshooting” below.)
It is usually necessary to explicitly enable stereo visuals in the graphics card configuration for both the client and server machines. The Troubleshooting section below lists a way to verify that both client and server have stereo visuals available.
If, for any given frame, VirtualGL detects that the application has
drawn anything to the right eye buffer, VGL will read back both eye
buffers and send the contents as a pair of compressed images (one for
each eye) to the VirtualGL client. The VGL client then decompresses
the stereo image pair and draws it as a single stereo frame to the
client’s display using glDrawPixels()
. It should
thus be no surprise that stereo performs, at best, only half as fast
as mono, since VirtualGL must compress twice as much data on the server
and use twice as much network bandwidth to send the stereo images to
the client.
Stereo requires Direct Mode. If VirtualGL is running in Raw Mode and the application renders something in stereo, only the contents of the left eye buffer will be sent to the X display.
Transparent overlays have requirements and restrictions similar to those of
stereo. In this case, VirtualGL completely bypasses its own GLX faker
and uses indirect OpenGL rendering to render the transparent overlay
on the client machine’s 3D graphics card. The underlay is still
rendered on the server, as always. Using indirect rendering to render
the overlay is unfortunately necessary, because there is no reliable
way to draw to an overlay using 2D (X11) functions, there are severe
performance issues (on some cards) with using glDrawPixels()
to draw to the overlay, and there is no reasonable way to composite
the overlay and underlay on the VirtualGL server.
The use of overlays is becoming more and more infrequent, and when they are used, it is generally only for drawing small, simple, static shapes and text. We have found that it is often faster to send the overlay geometry over to the client rather than rendering it as an image and sending the image. So even if it were possible to implement overlays without using indirect rendering, it’s likely that indirect rendering of overlays would still be the fastest approach for most applications.
As with stereo, overlays must sometimes be explicitly enabled in the graphics card’s configuration. In the case of overlays, however, they need only be supported and enabled on the client machine.
Indexed color (8-bit) overlays have been tested and are known to work
with VirtualGL. True color (24-bit) overlays work in theory but have
not been tested. Use glxinfo
(see
Troubleshooting
below) to verify whether your client’s X display supports overlays
and whether they are enabled. In Exceed 3D, make sure that the “Overlay
Support” option is checked in the “Exceed 3D and GLX”
applet.
Overlays do not work with X proxies (including TurboVNC.) VirtualGL must be displaying to a real X server on the client machine (either using Direct Mode or Raw Mode.)
In a PseudoColor visual, each pixel is represented by an index which refers to a location in a color table. The color table stores the actual color values (256 of them in the case of 8-bit PseudoColor) which correspond to each index. An application merely tells the X server which color index to use when drawing, and the X server takes care of mapping that index to an actual color from the color table. OpenGL allows for rendering to PseudoColor visuals, and it does so by being intentionally ignorant of the relationship between indices and actual colors. As far as OpenGL is concerned, each color index value is just a meaningless number, and it is only when the final image is drawn by the X server that these numbers take on meaning. As a result, many pieces of OpenGL’s core functionality, such as lighting and shading, either have undefined behavior or do not work at all with PseudoColor rendering.

PseudoColor rendering used to be a common technique to visualize scientific data, because such data often only contained 8 bits per sample to begin with. Applications could manipulate the color table to allow the user to dynamically control the relationship between sample values and colors. As more and more graphics cards drop support for PseudoColor rendering, however, the applications which use it are becoming a vanishing breed.
VirtualGL supports PseudoColor rendering if a PseudoColor visual is
available on the client’s display. A PseudoColor visual need
not be present on the server. On the server, VirtualGL uses the red
channel of a standard RGB Pbuffer to store the color index. Upon receiving
an end of frame trigger, VirtualGL reads back the red channel of the
Pbuffer and uses XPutImage()
to draw the color indices
into the appropriate X window. To put this another way, PseudoColor
rendering in VirtualGL always uses Raw Mode. However, since there
is only 1 byte per pixel in a PseudoColor “image”, the
images can still be sent to the client reasonably quickly even though
they are uncompressed.
PseudoColor rendering should work in VNC, provided that the VNC server is configured with an 8-bit color depth. TurboVNC does not support PseudoColor, but RealVNC and other VNC flavors do. Note, however, that VNC cannot provide both PseudoColor and TrueColor visuals at the same time.
VirtualGL includes a modified version of glxinfo
that
can be used to determine whether or not the client and server have
stereo, overlay, or PseudoColor visuals enabled.
Run one of the following command sequences on the VirtualGL server to determine whether the server has a suitable visual for stereographic rendering:
If the VirtualGL server is a Solaris/Sparc machine running in GLP mode:

/opt/SUNWvgl/bin/glxinfo -d {glp_device} -v

If the VirtualGL server is a Solaris machine that is not running in GLP mode:

xauth merge /etc/opt/VirtualGL/vgl_xauth_key
/opt/SUNWvgl/bin/glxinfo -display :0 -c -v

If the VirtualGL server is a Linux machine:

xauth merge /etc/opt/VirtualGL/vgl_xauth_key
/opt/VirtualGL/bin/glxinfo -display :0 -c -v
One or more of the visuals should say “stereo=1” and should list “Pbuffer” as one of the “Drawable Types.”
Run one of the following command sequences on the VirtualGL server to determine whether the X display on the client has a suitable visual for stereographic rendering, transparent overlays, or PseudoColor.

If the VirtualGL server is a Solaris machine:

xauth merge /etc/opt/VirtualGL/vgl_xauth_key
/opt/SUNWvgl/bin/glxinfo -v

If the VirtualGL server is a Linux machine:

xauth merge /etc/opt/VirtualGL/vgl_xauth_key
/opt/VirtualGL/bin/glxinfo -v
In order to use stereo, one or more of the visuals should say “stereo=1”. In order to use transparent overlays, one or more of the visuals should say “level=1”, should list a “Transparent Index” (non-transparent visuals will say “Opaque” instead), and should have a class of “PseudoColor.” In order to use PseudoColor (indexed) rendering, one of the visuals should have a class of “PseudoColor.”
The easiest way to uncover bottlenecks in the VirtualGL pipeline is
to set the VGL_PROFILE
environment variable to 1
on both server and client (passing an argument of +pr
to vglrun
on the server has the same effect.) This will
cause VirtualGL to measure and report the throughput of the various
stages in its pipeline. For example, here are some measurements from
a dual Pentium 4 server communicating with a Pentium III client on
a 100 Megabit LAN:
Server:

Readback   - 43.27 Mpixels/sec - 34.60 fps
Compress 0 - 33.56 Mpixels/sec - 26.84 fps
Total      -  8.02 Mpixels/sec -  6.41 fps - 10.19 Mbits/sec (18.9:1)

Client:

Decompress - 10.35 Mpixels/sec -  8.28 fps
Blit       - 35.75 Mpixels/sec - 28.59 fps
Total      -  8.00 Mpixels/sec -  6.40 fps - 10.18 Mbits/sec (18.9:1)
The total throughput of the pipeline is 8.0 Megapixels/sec, or 6.4 frames/sec, indicating that our frame is 8.0 / 6.4 = 1.25 Megapixels in size (a little less than 1280 x 1024 pixels.) The readback and compress stages, which occur in parallel on the server, are obviously not slowing things down. And we’re only using 1/10 of our available network bandwidth. So we look to the client and discover that its slow decompression speed (10.35 Megapixels/second) is the primary bottleneck. Decompression and blitting on the client do not occur in parallel, so the aggregate performance is the harmonic mean of the decompression and blitting rates: [1/ (1/10.35 + 1/35.75)] = 8.0 Mpixels/sec.
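To collect equivalent measurements for your own configuration, profiling can be enabled on both ends. This is a sketch; {application} is a placeholder, and the way the VirtualGL client is launched depends on your setup:

# On the VirtualGL server:
vglrun +pr {application}

# On the client machine, before starting the VirtualGL client:
export VGL_PROFILE=1
vglclient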
By default, VirtualGL will only send a frame to the client if the client is ready to receive it. If a rendered frame arrives at the server’s queue and a previous frame is still being processed, the new frame is dropped (“spoiled.”) This prevents a backlog of frames on the server, which would cause a perceptible delay in the responsiveness of interactive applications. But when running non-interactive applications, particularly benchmarks, it is desirable to disable frame spoiling. With frame spoiling disabled, the server will render frames only as quickly as VirtualGL can send those frames to the client, which will conserve server resources as well as allow OpenGL benchmarks to accurately measure the frame rate of the VirtualGL system. With frame spoiling enabled, these benchmarks will report meaningless data, since they are measuring the rate at which the server can render frames, and that frame rate is decoupled from the rate at which VirtualGL can send those frames to the client.
In a VNC environment, there is another layer of frame spoiling, since the server only sends updates to the client when the client requests them. So even if frame spoiling is disabled in VirtualGL, OpenGL benchmarks will still report meaningless data if they are run in a VNC session.
There are only two ways to accurately benchmark an application in VirtualGL: disable frame spoiling, or measure the frame rate actually delivered to the client with a tool such as TCBench (described below.)
To disable frame spoiling, set the VGL_SPOIL
environment
variable to 0
on the server or pass an argument of -sp
to vglrun
. See Section 19.1
for more details.
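For example, a benchmark run with spoiling disabled might look like this ({benchmark} is a placeholder for the OpenGL benchmark of your choice):

# Using the command-line override:
vglrun -sp {benchmark}

# Or, equivalently, using the environment variable:
export VGL_SPOIL=0
vglrun {benchmark}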
VirtualGL includes several tools which can be useful in diagnosing performance problems with the system.
NetTest is a network benchmark that uses the same network I/O classes
as VirtualGL. It can be used to test the latency and throughput of
any TCP/IP connection, with or without SSL encryption. The VirtualGL
Linux package installs NetTest in /opt/VirtualGL/bin
.
The VirtualGL Solaris package installs it in /opt/SUNWvgl/bin
.
The Windows installer installs it in c:\program files\VirtualGL-{version}-{build}
by default.
To use NetTest, first start up the nettest server on one end of the connection:
nettest -server [-ssl]
(use -ssl
if you want to test the performance of SSL encryption
over this particular connection.)
Next, start the client on the other end of the connection:
nettest -client {server} [-ssl]
Replace {server}
with the hostname or IP address of the
machine where the NetTest server is running. (Use -ssl
if the NetTest server is running in SSL mode.)
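A complete hypothetical session between two Linux machines, one of which is named vglserver (the hostname and install paths are assumptions):

# On the machine acting as the NetTest server:
/opt/VirtualGL/bin/nettest -server

# On the other machine:
/opt/VirtualGL/bin/nettest -client vglserver

# Repeat with -ssl on both ends to measure the overhead of encryption:
/opt/VirtualGL/bin/nettest -server -ssl
/opt/VirtualGL/bin/nettest -client vglserver -ssl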
The nettest client will produce output similar to the following:
TCP transfer performance between localhost and {server}:

Transfer size  1/2 Round-Trip      Throughput
(bytes)        (msec)              (MB/sec)
1              0.176896            0.005391
2              0.179391            0.010632
4              0.181600            0.021006
8              0.181292            0.042083
16             0.181694            0.083981
32             0.181690            0.167965
64             0.182010            0.335339
128            0.182197            0.669991
256            0.183593            1.329795
512            0.183800            2.656586
1024           0.186189            5.245015
2048           0.379702            5.143834
4096           0.546805            7.143778
8192           0.908712            8.597335
16384          1.643810            9.505359
32768          2.961701            10.551368
65536          5.769007            10.833754
131072         11.313003           11.049232
262144         22.412990           11.154246
524288         44.760510           11.170561
1048576        89.294810           11.198859
2097152        178.426602          11.209091
4194304        356.547194          11.218711
We can see that the throughput peaks at about 11.2 MB/sec. 1 MB = 1048576 bytes, so 11.2 MB/sec = 94 million bits per second, which is pretty good for a 100 Megabit connection. We can also see that, for small transfer sizes, the round-trip time is dominated by latency. The “latency” is the same thing as the 1/2 round-trip time for a zero-byte packet, which is about 0.18 milliseconds in this case.
CPUstat is available only in the VirtualGL Linux packages and is located
in the same place as NetTest (/opt/VirtualGL/bin
.) It
measures the average, minimum, and peak CPU usage for all processors
combined and for each processor individually. On Windows, this same
functionality is provided in the Windows Performance Monitor, which
is part of the operating system. On Solaris, the same data can be
obtained through vmstat
.
CPUstat measures the CPU usage over a given sample period (a few seconds) and continuously reports how much the CPU was utilized since the last sample period. Output for a particular sample looks something like this:
ALL :  51.0 (Usr= 47.5 Nice=  0.0 Sys=  3.5) / Min= 47.4  Max= 52.8  Avg= 50.8
cpu0:  20.5 (Usr= 19.5 Nice=  0.0 Sys=  1.0) / Min= 19.4  Max= 88.6  Avg= 45.7
cpu1:  81.5 (Usr= 75.5 Nice=  0.0 Sys=  6.0) / Min= 16.6  Max= 83.5  Avg= 56.3
The first column indicates what percentage of time the CPU was active since the last sample period (this is then broken down into what percentage of time the CPU spent running user, nice, and system/kernel code.) “ALL” indicates the average utilization across all CPU’s since the last sample period. “Min”, “Max”, and “Avg” indicate a running minimum, maximum, and average of all samples since cpustat was started.
Generally, if an application’s CPU usage is fairly steady, you can run CPUstat for a while and wait for the Max and Avg values in the “ALL” row to stabilize; those values indicate the application’s peak and average CPU utilization.
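For example, on Linux (using the install path given above), start the application under test and then run the following in a second shell, stopping it with CTRL-C once the readings stabilize:

/opt/VirtualGL/bin/cpustat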
TCBench was born out of the need to compare VirtualGL’s performance to other thin client packages, some of which had frame spoiling features that couldn’t be disabled. TCBench measures the frame rate of a thin client system as seen from the client’s point of view. It does this by attaching to one of the client windows and continuously reading back a small area at the center of the window. While this may seem to be a somewhat non-rigorous test, experiments have shown that if care is taken to make sure that the application is updating the center of the window on every frame (such as in a spin animation), TCBench can produce quite accurate results. It has been sanity checked with VirtualGL’s internal profiling mechanism and with a variety of system-specific techniques, such as monitoring redraw events on the client’s windowing system.
The VirtualGL Linux package installs TCBench in /opt/VirtualGL/bin
.
The VirtualGL Solaris package installs TCBench in /opt/SUNWvgl/bin
.
The Windows installer installs it in c:\program files\VirtualGL-{version}-{build}
by default. Run tcbench
from the command line, and it
will prompt you to click in the window you want to measure. That
window should already have an automated animation of some sort running
before you launch TCBench.
TCBench can also be used to measure the frame rate of applications that are running on the local console, although for extremely fast applications (those that exceed 40 fps on the local console), you may need to increase the sampling rate of TCBench to get accurate results. The default sampling rate of 50 samples/sec should be fine for measuring the throughput of VirtualGL and other thin client systems.
tcbench -?
gives the relevant command line switches that can be used to adjust the benchmark time, the sampling rate, and the x and y offset of the sampling area within the window.
Several of VirtualGL’s configuration parameters can be changed
on the fly once an application has started. This is accomplished by
using the VirtualGL configuration dialog, which can be activated by
holding down the CTRL
and SHIFT
keys and
pressing the F9
key while any one of the application’s
windows is active. This displays the VirtualGL configuration dialog box.
You can use this dialog to enable or disable frame spoiling or to adjust the JPEG quality and subsampling. Changes are reflected immediately in the application.
The JPEG quality and subsampling gadgets will only be shown if VirtualGL is running in Direct Mode. In Raw Mode, the only setting that can be changed with this dialog is frame spoiling.
The VGL_GUI
environment variable can be used to change
the key sequence used to pop up the dialog box. If the default of
CTRL-SHIFT-F9
is not suitable, then set VGL_GUI
to any combination of ctrl
, shift
, alt
,
and one of {f1, f2,..., f12}
(these are not
case sensitive.) For example:
export VGL_GUI=CTRL-F9
will cause the dialog box to pop up whenever CTRL-F9
is
pressed.
To disable the VirtualGL dialog altogether, set VGL_GUI
to none
.
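For example, either of the following would rebind or disable the dialog, respectively:

export VGL_GUI=ALT-F12
export VGL_GUI=none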
VirtualGL monitors the application’s X event loop to determine whenever a particular key sequence has been pressed. If an application is not monitoring key press events in its X event loop, then the VirtualGL configuration dialog might not pop up at all. There is unfortunately no workaround for this, but it should be a rare occurrence.
You can control the operation of the VirtualGL faker in four different ways. Each method of configuration takes precedence over the previous method:

1. Set configuration environment variables globally for all users of the system (e.g. in /etc/profile)
2. Set configuration environment variables for a particular user (e.g. in ~/.bashrc)
3. Set configuration environment variables in the current shell or script (export VGL_XXX={whatever})
4. Pass configuration options as command-line arguments to vglrun. This effectively overrides any previous environment variable setting corresponding to that configuration option.
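To illustrate the precedence, assume the following hypothetical settings are all in effect; the value on the vglrun command line wins for this run:

# 1. In /etc/profile (all users):
export VGL_QUAL=95

# 2. In ~/.bashrc (one user):
export VGL_QUAL=80

# 3. In the current shell:
export VGL_QUAL=90

# 4. On the vglrun command line (effective JPEG quality for this run: 70):
vglrun -q 70 {application}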
Environment Variable Name | vglrun Command-Line Override |
Description | Default Value |
---|---|---|---|
VGL_CLIENT |
-cl <client display> |
The X display where VirtualGL should send its image stream When running in Direct Mode, VirtualGL uses a dedicated TCP/IP connection to transmit compressed images of an application’s OpenGL rendering area from the VirtualGL server to the VirtualGL client. Thus, the VirtualGL server needs to know on which machine the VirtualGL client software is running, and it needs to know which X display on that machine will be used to draw the application’s GUI. VirtualGL can normally surmise this by reading the DISPLAY environment variable (which lists the hostname and X display where all X11 traffic will be sent.) But in cases where X11 traffic is tunneled through SSh or another type of indirect X11 connection, the DISPLAY environment variable on the VirtualGL server may not point to the client machine. In these cases, set VGL_CLIENT to the display where the application’s GUI will end up. For example: export VGL_CLIENT=my_client:0.0 If you are connecting to the VirtualGL server using SSh with X11 forwarding enabled, VirtualGL will try to guess an appropriate value for VGL_CLIENT based on the IP address of the SSh client, so you would only need to set VGL_CLIENT in this case if your configuration is unusual (such as if your client machine’s X server is occupying a display number other than 0 or if you are trying to forward VirtualGL’s image stream over SSh. See Chapter 9 for more details.) ** This option has no effect in “Raw” Mode. ** |
If SSh X11 forwarding is being used, VirtualGL will automatically set VGL_CLIENT to {ssh_client}:0.0 , where {ssh_client} is the IP address of the machine from which the SSh connection was initiated. Otherwise, VGL_CLIENT is unset, which tells VirtualGL to read the client hostname and X display from the DISPLAY environment variable instead. |
VGL_COMPRESS=0 VGL_COMPRESS=1 |
-c <0, 1> |
0 = Raw Mode (send rendered images uncompressed via X11), 1 = Direct Mode (compress rendered images as JPEG & send on a separate socket) When this option is set to 0, VirtualGL will bypass its internal image compression pipeline and instead use XPutImage() to composite the rendered 3D images into the appropriate application window. This mode (“Raw Mode”) is primarily useful in conjunction with VNC, NX, or other remote display software that performs X11 rendering on the server and uses its own mechanism for compressing and transporting images to the client. Enabling Raw Mode on a remote X11 connection will result in uncompressed images being sent over the network, so it is inadvisable except on very fast networks (see Section 11.0.2.) If this option is not specified, then VirtualGL’s default behavior is to use Direct Mode when the application is being displayed to a remote X server and to use Raw Mode otherwise. VirtualGL assumes that if the DISPLAY environment variable begins with a colon or with “unix: ” (example: “:0.0 ”, “unix:1000.0 ”, etc.), then the X11 connection is local and thus doesn’t require image compression. Otherwise, it assumes that the X11 connection is remote and that compression is required. If the display string begins with “localhost ” or with the server’s hostname, VGL assumes that the display is being tunneled through SSh, and its default behavior is to use Direct Mode in this case. It is normally not necessary to set this configuration parameter unless you want to do something unusual (such as use Raw Mode over a remote X11 connection.) See Chapter 10 for more details. NOTE: Stereo does not work with Raw Mode. |
Compression enabled (“Direct Mode”) if the application is displaying to a remote X server, disabled (“Raw Mode”) otherwise. |
VGL_DISPLAY |
-d <display or GLP device> |
The display or GLP device to use for 3D rendering If your server has multiple 3D graphics cards and you want the OpenGL rendering to be redirected to a display other than :0, set VGL_DISPLAY=:1.0 or whatever. This could be used, for instance, to support many application instances on a beefy multi-pipe graphics server. GLP mode (Solaris/Sparc only): Setting this option to glp will enable GLP mode and use the first framebuffer device listed in /etc/dt/config/GraphicsDevices to perform 3D rendering. You can also set this option to the pathname of a specific GLP device (example: /dev/fbs/jfb0 .) See Section 7.1 for more details. |
:0 |
VGL_FPS |
-fps <floating point number greater than 0> |
Limit the client/server frame rate to the specified number of frames per second Setting VGL_FPS or passing -fps as an argument to vglrun will enable VirtualGL’s frame rate governor. When enabled, the frame rate governor will attempt to limit the overall throughput of the VirtualGL pipeline to the specified number of frames/second. If frame spoiling is disabled, this effectively limits the server’s 3D rendering frame rate as well. This option works regardless of whether VirtualGL is being run in Direct Mode (with compression enabled) or in Raw Mode (with compression disabled.) |
Frame rate governor disabled |
VGL_GAMMA=0 VGL_GAMMA=1 VGL_GAMMA=<gamma correction factor> |
-g or +g or -gamma <gamma correction factor> |
“Gamma” refers to the relationship between the intensity of light which your computer’s monitor is instructed to display and the intensity which it actually displays. The curve is an exponential curve of the form Y = X^G, where X is between 0 and 1. G is called the “gamma” of the monitor. PC monitors and TV’s usually have a gamma of around 2.2. Some of the math involved in 3D rendering assumes a linear gamma (G = 1.0), so technically speaking, 3D applications will not display with mathematical correctness unless the pixels are “gamma corrected” to counterbalance the non-linear response curve of the monitor. But some systems do not have any form of built-in gamma correction, and thus the applications developed for such systems have usually been designed to display properly without gamma correction. Gamma correction involves passing pixels through a function of the form X = W^(1/G), where G is the “gamma correction factor” and should be equal to the gamma of the monitor. So the final output is Y = X^G = (W^(1/G))^G = W, which describes a linear relationship between the intensity of the pixels drawn by the application and the intensity of the pixels displayed by the monitor. VGL_GAMMA=1 or vglrun +g : Enable gamma correction with default settings This option tells VirtualGL to enable gamma correction using the best available method. If VirtualGL is remotely displaying to a Solaris/Sparc X server which has gamma-corrected X visuals, then VGL will attempt to assign one of these visuals to the application. This causes the 3D output of the application to be gamma corrected by the factor specified in fbconfig on the client machine (default: 2.22.) Otherwise, if the X server does not have gamma-corrected X visuals or if the gamma-corrected visuals it has do not match the application’s needs, then VirtualGL performs gamma correction internally and uses a default gamma correction factor of 2.22. This option emulates the default behavior of OpenGL applications running locally on Sparc machines. VGL_GAMMA=0 or vglrun -g : Disable gamma correction This option tells VGL not to use gamma-corrected visuals, even if they are available on the X server, and disables VGL’s internal gamma correction system as well. This emulates the default behavior of OpenGL applications running locally on Linux or Solaris/x86 machines. VGL_GAMMA={gamma correction factor} or vglrun -gamma {gamma correction factor} : Enable VGL’s internal gamma correction system with the specified gamma correction factor If VGL_GAMMA is set to an arbitrary floating point value, then VirtualGL performs gamma correction internally using the specified value as the gamma correction factor. You can also specify a negative value to apply a “de-gamma” function. Specifying a gamma correction factor of G (where G < 0) is equivalent to specifying a gamma correction factor of -1/G. |
VGL_GAMMA=1 on Solaris/Sparc VGL servers, VGL_GAMMA=0 otherwise |
VGL_GLLIB |
The location of an alternate OpenGL library Normally, VirtualGL loads the first OpenGL dynamic library that it finds in the dynamic linker path (usually /usr/lib/libGL.so.1 , /usr/lib64/libGL.so.1 , or /usr/lib/64/libGL.so.1 .) You can use this setting to explicitly specify another OpenGL dynamic library to load. Normally, you shouldn’t need to muck with this unless something doesn’t work. However, this setting is necessary when using VirtualGL with Chromium. |
||
VGL_GUI |
Key sequence used to invoke the configuration dialog VirtualGL will normally monitor an application’s X event queue and pop up the VirtualGL configuration dialog whenever CTRL-SHIFT-F9 is pressed. In the event that this interferes with a key sequence that the application is already using, you can redefine the key sequence used to pop up VGL’s configuration dialog by setting VGL_GUI to some combination of shift , ctrl , alt , and one of {f1, f2, ..., f12} . You can also set VGL_GUI to none to disable the configuration dialog altogether. See Chapter 18 for more details. |
shift-ctrl-f9 | |
VGL_GUI_XTTHREADINIT |
0 to prevent VGL from calling XtToolkitThreadInitialize() Xt & Motif applications are supposed to call XtToolkitThreadInitialize() if they plan to access Xt functions from two or more threads simultaneously. But rarely, a multi-threaded Xt/Motif application may avoid calling XtToolkitThreadInitialize() and rely on the fact that avoiding this call disables application and process locks. This behavior is generally considered errant on the part of the application, but the application developers have probably figured out other ways around the potential instability that this situation creates. The problem arises whenever VirtualGL pops up its configuration dialog (which is written using Xt.) In order to create this dialog, VirtualGL creates a new Xt thread and calls XtToolkitThreadInitialize() as it is supposed to do to guarantee thread safety. But if the application into which VGL is loaded exhibits the errant behavior described above, suddenly enabling application and process locks may cause the application to deadlock. Setting VGL_GUI_XTTHREADINIT to 0 will remove VGL’s call to XtToolkitThreadInitialize() and should thus eliminate the deadlock. In short, if you try to pop up the VirtualGL config dialog and notice that it hangs the application, try setting VGL_GUI_XTTHREADINIT to 0 . |
1 | |
VGL_INTERFRAME=0 VGL_INTERFRAME=1 |
Enable/disable interframe image comparison In Direct Mode, VGL will normally compare each image tile in the frame with the corresponding image tile in the previous frame and send only the tiles that have changed. Setting VGL_INTERFRAME to 0 disables this behavior. Normally, you shouldn’t need to disable interframe comparison except in rare situations. This setting was introduced in order to work around a specific interaction issue between VirtualGL and Pro/ENGINEER v3. See Section 15 for more information. ** This option has no effect in “Raw” Mode. ** |
Inter-frame comparison enabled | |
VGL_LOG |
Redirect the console output from the VirtualGL faker to a log file Setting this environment variable to the pathname of a log file on the VirtualGL server will cause the VirtualGL faker to redirect all of its messages (including profiling and trace output) to the specified log file rather than to stderr. |
Print all messages to stderr | |
VGL_NPROCS |
-np <# of CPUs> or -np 0 (automatically determine the optimal number of CPUs to use) |
Specify the number of CPUs to use for multi-threaded compression VirtualGL can divide the task of compressing each frame among multiple server CPUs. This might speed up the overall throughput if the compression stage of the pipeline is the primary bottleneck. The default behavior (equivalent to setting VGL_NPROCS=0 ) is to use all but one of the available CPUs, up to a maximum of 3 total. On a large multiprocessor system, the speedup is almost linear up to 3 processors, but the algorithm scales very little past that point. VirtualGL will not allow more than 4 processors total to be used for compression, nor will it allow you to assign more processors than are available in the system. ** This option has no effect in “Raw” Mode. ** |
1P system: 1 2P system: 1 3P system: 2 4P & larger: 3 |
VGL_PORT |
-p <port> |
The TCP port to use when connecting to the client ** This option has no effect in “Raw” Mode. ** |
4242 for unencrypted connections, 4243 for SSL connections |
VGL_PROFILE=0 VGL_PROFILE=1 |
-pr or +pr |
Enable/disable profiling output If enabled, this will cause the VirtualGL faker to continuously benchmark itself and periodically print out the throughput of reading back, compressing, and sending pixels to the client. See Chapter 17 for more details. |
Profiling disabled |
VGL_QUAL |
-q <1-100> |
An integer between 1 and 100 (inclusive) This setting allows you to specify the quality of the JPEG compression. Lower is faster but also grainier. The default setting should produce perceptually lossless image quality. ** This option has no effect in “Raw” Mode. ** |
95 |
VGL_READBACK=0 VGL_READBACK=1 |
Enable/disable readback On rare occasions, it might be desirable to have VirtualGL redirect OpenGL rendering from an application into a Pbuffer but not automatically read back and send the rendered pixels. Some applications have their own mechanisms for reading back the buffer, so disabling VirtualGL’s readback mechanism prevents duplication of effort. This feature was developed initially to support running ParaView in parallel using MPI. ParaView MPI normally uses MPI processes 1 through N as rendering servers, each drawing a portion of the geometry into a separate window on a separate X display. ParaView reads back these server windows and composites the pixels into the main application window, which is controlled by MPI process 0. By creating a script which passes a different value of VGL_DISPLAY and VGL_READBACK to each MPI process, it is possible to make all of the ParaView server processes render to off-screen buffers on different graphics cards while preventing VirtualGL from displaying any pixels except those generated by process 0. (A minimal wrapper script sketch appears after this table.) |
Readback enabled | |
VGL_SPOIL=0 VGL_SPOIL=1 |
-sp or +sp |
Enable/disable frame spoiling By default, VirtualGL will drop frames so as not to slow down the rendering rate of the server’s graphics engine. This should produce the best results with interactive applications, but it may be desirable to turn off frame spoiling when running benchmarks or other non-interactive applications. Turning off frame spoiling will force one frame to be read back and sent on each end-of-frame event, so that the frame rate reported by OpenGL benchmarks will accurately reflect the frame rate seen by the user. Disabling frame spoiling also prevents non-interactive applications from wasting graphics resources by rendering frames that will never be seen. With frame spoiling turned off, the 3D rendering pipeline behaves as if it is fill-rate limited to about 30 or 40 Megapixels/second, the maximum throughput of the VirtualGL system on current CPU’s. |
Spoiling enabled |
VGL_SSL=0 VGL_SSL=1 |
-s or +s |
Tunnel the VirtualGL compressed image stream inside a secure socket layer ** This option has no effect in “Raw” Mode. ** |
SSL disabled |
VGL_SUBSAMP |
-samp <411|422|444> |
411, 422, or 444 This allows you to manually specify the level of chrominance subsampling in the JPEG compressor. By default, VirtualGL uses no chrominance subsampling (AKA “4:4:4 subsampling”) when it compresses images for delivery to the client. Subsampling is premised on the fact that the human eye is more sensitive to changes in brightness than to changes in color. Since the JPEG image format uses a colorspace in which brightness (luminance) and color (chrominance) are separated into different channels, one can sample the brightness for every pixel and the color for every other pixel and produce an image which has 16 million colors but uses an average of only 16 bits per pixel instead of 24. This is called “4:2:2 subsampling”, since for every 4 pixels of luminance, there are only 2 pixels of each chrominance component. Likewise, one can sample every fourth chrominance component to produce a 16-million color image with only 12 bits per pixel. The latter is called “4:1:1 subsampling.” Subsampling decreases the amount of image data and thus increases the performance and decreases the network bandwidth usage, but subsampling can produce some visible artifacts. Subsampling artifacts are rarely observed with volume data, since it usually only contains 256 colors to begin with. But narrow, aliased lines and other sharp features on a black background will tend to produce artifacts when subsampling is enabled. (Figure: the axis indicator from a popular visualization application, displayed with 4:4:4, 4:2:2, and 4:1:1 subsampling, respectively.) NOTE: If you select 4:1:1 subsampling, VirtualGL will in fact try to use 4:2:0 instead. 4:2:0 samples every other pixel both horizontally and vertically rather than sampling every fourth pixel horizontally. But not all JPEG codecs support 4:2:0, so 4:1:1 is used when 4:2:0 is not available. ** This option has no effect in “Raw” Mode. ** |
444 |
VGL_SYNC=0 VGL_SYNC=1 |
-sync or +sync |
Enable/disable strict 2D/3D synchronization (necessary to pass GLX conformance tests) Normally, VirtualGL’s operation is asynchronous from the point of view of the application. The application swaps the buffers or calls glFinish() or glFlush() or glXWaitGL() , and VirtualGL reads back the framebuffer and sends the pixels to the client’s display … eventually. This will work fine for the vast majority of applications, but it is not strictly conformant. Technically speaking, when an application calls glXWaitGL() or glFinish() , it is well within its rights to expect the OpenGL-rendered pixels to be immediately available in the X window. Fortunately, very few applications actually do expect this, but on rare occasions, an application may try to use XGetImage() or other X11 functions to obtain a bitmap of the pixels that were rendered by OpenGL. Enabling VGL_SYNC is a somewhat extreme measure that may be needed to get such applications to work properly. It was developed primarily as a way to pass the GLX conformance suite (conformx , specifically.) When VGL_SYNC is enabled, every call to glFinish() or glXWaitGL() will cause the contents of the server’s framebuffer to be read back and synchronously drawn into the client’s window without compression or frame spoiling. The call to glFinish() or glXWaitGL() will not return until VirtualGL has verified that the pixels have been delivered into the client’s window. As such, enabling this mode can have potentially dire effects on performance. |
Synchronization disabled |
VGL_TILESIZE |
A number between 8 and 1024 (inclusive) Normally, in Direct Mode, VirtualGL will divide an OpenGL window into tiles of 256x256 pixels, compare each tile against the previous frame, and only compress & send the tiles which have changed. It will also divide up the task of compressing these tiles among the available CPUs in a round robin fashion, if multi-threaded compression is enabled. There are several tradeoffs that must be considered when choosing a tile size. Smaller tiles can more easily be divided up among multiple CPUs, but they compress less efficiently (and less quickly) on an individual basis. Using larger tiles can reduce traffic to the client by allowing the server to send only one frame update instead of many. But on the flip side, using larger tiles decreases the chance that a tile will be unchanged from the previous frame. Thus, the server may only send one or two packets per frame, but the cumulative size of those packets may be much larger than if a smaller tile size was used. 256x256 was chosen as the default because, in experiments, it provided the best balance between scalability and efficiency on the platforms that VirtualGL supports. ** This option has no effect in “Raw” Mode. ** |
256 | |
VGL_TRACE=0 VGL_TRACE=1 |
-tr or +tr |
Enable/disable tracing When tracing is enabled, VirtualGL will log all calls to the GLX and X11 functions it is intercepting, as well as the arguments, return values, and execution times for those functions. This is useful when diagnosing interaction problems between VirtualGL and a particular OpenGL application. |
Tracing disabled |
VGL_VERBOSE=0 VGL_VERBOSE=1 |
-v or +v |
Enable/disable verbosity When in verbose mode, VirtualGL will reveal some of the decisions it makes behind the scenes, such as which code path it is using to compress JPEG images, which type of X11 drawing it is using, etc. This can be helpful when diagnosing performance problems. |
Verbosity disabled |
VGL_X11LIB |
the location of an alternate X11 library Normally, VirtualGL loads the first X11 dynamic library that it finds in the dynamic linker path (usually /usr/lib/libX11.so.? , /usr/lib/64/libX11.so.? , /usr/X11R6/lib/libX11.so.? , or /usr/X11R6/lib64/libX11.so.? .) You can use this setting to explicitly specify another X11 dynamic library to load. Normally, you shouldn’t need to muck with this unless something doesn’t work. |
||
VGL_XVENDOR |
Return a fake X11 vendor string when the application calls XServerVendor() Some applications expect XServerVendor() to return a particular value, which the application (sometimes erroneously) uses to figure out whether it’s running locally or remotely. This setting allows you to fool such applications into thinking they’re running on a “local” X server rather than a remote connection. |
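The ParaView MPI scenario described in the VGL_READBACK entry above can be scripted roughly as follows. This is only a sketch: the rank environment variable differs between MPI implementations, the script name is hypothetical, and the mapping of ranks onto two X screens of display :0 is an assumption to be adjusted for your hardware.

#!/bin/sh
# vgl-mpi-wrapper: per-process VirtualGL settings for ParaView MPI (sketch)
# PMI_RANK is used by some MPI implementations; others use MPIRUN_RANK or
# OMPI_COMM_WORLD_RANK. Adjust for yours.
RANK=${PMI_RANK:-0}
if [ "$RANK" -eq 0 ]; then
  # Process 0 owns the main application window, so let VirtualGL read back
  # and display its pixels normally.
  export VGL_READBACK=1
else
  # Processes 1..N are rendering servers: render off-screen on one of two
  # (assumed) X screens and suppress VirtualGL's readback.
  export VGL_READBACK=0
  export VGL_DISPLAY=:0.$(( (RANK - 1) % 2 ))
fi
exec vglrun "$@"

The wrapper would then be passed to the MPI launcher in place of the application, e.g. mpirun -np 4 ./vgl-mpi-wrapper paraview.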
Environment Variable Name | Description | Default Value |
---|---|---|
VGL_PROFILE=0 VGL_PROFILE=1 |
Enable/disable profiling output If enabled, this will cause the VirtualGL client to continuously benchmark itself and periodically print out the throughput of decompressing and drawing pixels into the application window. See Chapter 17 for more details. |
Profiling disabled |
VGL_VERBOSE=0 VGL_VERBOSE=1 |
Enable/disable verbosity When in verbose mode, VirtualGL will reveal some of the decisions it makes behind the scenes, such as which code path it is using to decompress JPEG images, which type of X11 drawing it is using, etc. This can be helpful when diagnosing performance problems. |
Verbosity disabled |
vglclient Command-Line Arguments

vglclient Argument |
Description | Default |
---|---|---|
-port <port number> |
Causes the client to listen for unencrypted connections on the specified TCP port | 4242 |
-sslport <port number> |
Causes the client to listen for SSL connections on the specified TCP port | 4243 |
-sslonly |
Causes the client to reject all unencrypted connections | Accept both SSL and unencrypted connections |
-nossl |
Causes the client to reject all SSL connections | Accept both SSL and unencrypted connections |
-l <log file> |
Redirect all output from the client to the specified file | Output goes to stderr |
-x |
Use X11 functions to draw pixels into the application window | Use OpenGL on Solaris/Sparc or if stereo is enabled; use X11 otherwise |
-gl |
Use OpenGL functions to draw pixels into the application window | Use OpenGL on Solaris/Sparc or if stereo is enabled; use X11 otherwise |