Co-Location Displays

When building a visuo-haptic system it is often beneficial to let users feel virtual objects where they see them. This can be achieved in several ways, but the most common is to use a tilted monitor and a mirror, so that a reflected image plane is created in the empty space under the mirror. A haptic device is placed in this space so that the manipulandum (the handle of the device) is co-located with the reflected image of the virtual scene. To enhance the effect, stereoscopic glasses are used, and sometimes also head-tracking.

Conceptually it is simple: hang a monitor at the desired angle (usually 45° or 60°), use a front-surface or semi-transparent mirror, and make sure the whole construction is rigid. When using 3D shutter glasses such as Nvidia 3D Vision, the polarization of the light changes when it reflects in the mirror, so a half-wave or quarter-wave retarder has to be placed in the ray path, preferably before the mirror (Forsslund & Flodin, 2009). If you don’t want to build such a setup yourself, we have made one that we offer for sale (contact us directly if you are interested, even before it appears in our upcoming webshop!).
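To make the geometry concrete, the reflected image plane is simply the monitor’s image plane reflected across the mirror plane. Below is a minimal sketch of that reflection in Python with NumPy; it is not tied to any particular haptics toolkit, and the axis convention and the 45° example angle are assumptions for illustration.

```
import numpy as np

def mirror_reflection_matrix(normal, point_on_mirror):
    """4x4 homogeneous matrix reflecting points across a mirror plane.

    The plane is given by its unit normal and any point on it. Applying
    the matrix to points on the monitor's image plane yields the
    reflected image plane seen beneath the mirror.
    """
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    p = np.asarray(point_on_mirror, dtype=float)
    d = -np.dot(n, p)                   # plane equation: n.x + d = 0
    M = np.eye(4)
    M[:3, :3] -= 2.0 * np.outer(n, n)   # Householder reflection
    M[:3, 3] = -2.0 * d * n             # translation term for an offset plane
    return M

# Example: a mirror plane tilted 45 degrees from horizontal, through the origin,
# with x horizontal, y toward the user and z up (an assumed convention).
theta = np.radians(45)
M = mirror_reflection_matrix([0.0, np.sin(theta), np.cos(theta)], [0.0, 0.0, 0.0])
print(M @ np.array([0.0, 0.0, 0.1, 1.0]))  # reflect a sample point through the mirror
```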

Left: our co-location system, here with a haptic device on loan from KTH. To the right, a rendering of the same system showing the reflected image plane when the monitor is tilted 60°. Both the mirror and the monitor are height-adjustable.

The benefit of co-location has been studied by, for example, Olsson et al. (2012), who showed that tasks were completed faster in a co-located than in a non-co-located system. One thing to note with a mirrored system is that the viewpoint position and direction change in a real-world reference frame (e.g. relative to the table): the user now looks halfway downwards instead of straight ahead as in a regular monitor setup. This can be overcome cognitively (humans are good at adjusting within some angular limits), or the designer needs to think through what is desired.

In this photo the virtual environment, consisting of a large green box with small movable cubes, is seen from (almost) the user’s perspective, illustrating that the green box appears to extend into the screen with its walls parallel to the reflected image plane. If desired, the virtual viewpoint’s position and orientation can be changed so that the walls are perpendicular to the physical table, as sketched below.
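Re-levelling the scene in this way amounts to rotating the virtual viewpoint (or, equivalently, the whole scene) about the horizontal axis by the angle between the reflected image plane and the table. A minimal sketch, assuming the x-axis is the horizontal axis and that the angle is known from the physical construction:

```
import numpy as np

def tilt_compensation(angle_deg):
    """Rotation about the x-axis that re-levels the virtual scene.

    angle_deg is the angle between the reflected image plane and the
    physical table, which follows from the monitor and mirror tilt.
    Applying this to the scene makes walls that were parallel to the
    reflected image plane perpendicular to the table instead.
    """
    a = np.radians(angle_deg)
    return np.array([
        [1.0, 0.0,        0.0,        0.0],
        [0.0, np.cos(a), -np.sin(a),  0.0],
        [0.0, np.sin(a),  np.cos(a),  0.0],
        [0.0, 0.0,        0.0,        1.0],
    ])
```

Whether you rotate the camera or the scene root is a matter of taste, but the haptic workspace transform must be updated consistently so that co-location is preserved.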

Furthermore, to correctly co-locate the haptic device with the image, calibration software can be used. This is particularly useful when the haptic device is arbitrarily placed in the workspace or its manipulandum origin is unknown. The calibration can be carried out using a semi-transparent window:

The reflected image plane is seen in the semi-transparent window, so the physical haptic device’s manipulandum and the user’s hand are seen fused with the 3D image. Using a software application called Calib, which is part of the Candy package for H3D developed by Karl-Johan Lundin Palmerius, one moves a visual avatar in the space, locks it, and then moves the physical manipulandum to match the visual:
After a couple of such samples, a device-to-world transformation matrix can be derived.
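Calib’s internals are not shown here, but a standard way to derive a rigid device-to-world transform from paired samples like these is a least-squares fit, e.g. the Kabsch/Horn method. A minimal NumPy sketch, assuming at least three non-collinear sample pairs and that only rotation and translation (no scaling) separate the two frames:

```
import numpy as np

def device_to_world(device_pts, world_pts):
    """Least-squares rigid transform (Kabsch method) from paired samples.

    device_pts, world_pts: (N, 3) arrays of corresponding positions,
    recorded by matching the physical manipulandum to the locked visual
    avatar. Returns a 4x4 matrix mapping device to world coordinates.
    """
    P = np.asarray(device_pts, dtype=float)
    Q = np.asarray(world_pts, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                      # nearest rotation (no reflection)
    t = cq - R @ cp
    M = np.eye(4)
    M[:3, :3], M[:3, 3] = R, t
    return M
```

More samples, well spread over the workspace, help average out noise in the fit.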

We will discuss optimal stereo rendering for co-location displays in a later post, but feel free to comment if you have any questions or feedback!
