Week 7 summaries

SCAPE: Stereoscopic Collaboration in Augmented and Projective Environments

SCAPE provides multiple users with a shared workspace in which they can concurrently observe and interact with a 3D virtual environment while preserving face-to-face cooperation among local participants. This allows users to dynamically switch focus between the shared workspace and the interpersonal communication space. Each user views the task from their individual perspective through a head-mounted projective display (HMPD). The advantages of the HMPD are that it provides perspective-correct stereo images for each user, has a larger field of view (FOV), uses lightweight, low-distortion optics, and provides correct occlusion of virtual objects by real objects.

The system includes an interactive workbench, essentially a retro-reflective workbench combined with multiple head-tracked HMPDs and multimodal interaction devices such as wireless DataGloves. This allows multiple users to view and manipulate a 3D dataset superimposed on its physical counterpart on the bench, each from their individual perspective, i.e., it provides users with outside-in workbench views. A key advantage of this configuration is that if two users point to the same part of the dataset, their fingers will touch. SCAPE also includes a four-walled, CAVE-like environment that provides users with inside-out, life-size walk-through views at a much lower cost than building a CAVE. However, HMPDs suffer from the light-passing problem, so the environment must be dimly lit, which makes reading difficult and limits viewing distance, which in turn restricts the size of the room.

The system covers the display surfaces with retro-reflective material because this makes the user's perception of the image shape and location independent of the screen's shape and location. However, retro-reflection is only dominant within roughly ±40° entrance angles, so concave shapes can improve image brightness. They therefore used a cylindrical display for the interactive workbench and rounded corners instead of square corners for the multi-wall room display. Although retro-reflective materials provide undistorted perspectives, the major drawback is that the material must be deliberately applied to physical surfaces for each application, which limits portability.
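The entrance-angle constraint can be sketched as a simple geometric check. This is a minimal illustration, not the paper's code: the ±40° threshold comes from the summary above, and the vector math is standard angle-between-vectors arithmetic.

```python
import numpy as np

def entrance_angle_deg(view_dir, surface_normal):
    """Angle between the incoming view ray and the screen normal, in degrees."""
    v = np.asarray(view_dir, dtype=float)
    n = np.asarray(surface_normal, dtype=float)
    cos_theta = abs(v @ n) / (np.linalg.norm(v) * np.linalg.norm(n))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def is_retro_reflective(view_dir, surface_normal, limit_deg=40.0):
    """Retro-reflection is only dominant within roughly +/-40 deg entrance angle."""
    return entrance_angle_deg(view_dir, surface_normal) <= limit_deg

# A ray hitting the screen head-on retro-reflects well; a steep ray does not.
print(is_retro_reflective([0, 0, -1], [0, 0, 1]))    # head-on (0 deg): True
print(is_retro_reflective([1, 0, -0.5], [0, 0, 1]))  # ~63 deg: False
```

This also makes the design choice concrete: curving the screen (cylindrical bench, rounded room corners) keeps more of the surface inside the ±40° window for a tracked head position.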

They also developed a magnifier widget that lets a user examine detailed views of the virtual data on the workbench. The magnifier is essentially a handheld device coated with retro-reflective film, with a motion tracker attached. Each user could optionally associate themselves with a graphical avatar to convey presence to off-site collaborators and assist interaction. The system also limited the accessibility of certain data and devices by constraining their ownership.

Overall, the system seems like a powerful collaboration tool, but one question I would like answered is whether they were able to support interactive remote collaboration, which was listed as future work.

A Practical Multi-viewer Tabletop Autostereoscopic Display:

In this paper the authors present the first practical multi-viewer, full-color autostereo display supporting tabletop applications, using a novel calibration method integrated with the viewer-tracking system. The system is based on the "Random Hole Display" design, which changes the pattern of openings in a barrier mounted in front of a flat-panel display from thin slits to a dense pattern of tiny, pseudo-randomly placed holes. This design lets each user see a different set of pixels through the random holes in the barrier. The key advantage is that each user gets their own stereo perception, independent of the others, without wearing any kind of special glasses, as long as the viewer's eyes are tracked accurately.
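A minimal sketch of generating such a pseudo-random barrier pattern is below. The hole density and the purely uniform random placement are my assumptions for illustration; the paper's actual barrier pattern is designed more carefully (e.g., controlling hole spacing to reduce interference between views).

```python
import numpy as np

def make_random_hole_mask(width, height, hole_fraction=0.1, seed=42):
    """Pseudo-randomly placed barrier holes: True = transparent, False = opaque.

    hole_fraction and uniform placement are illustrative assumptions; a real
    barrier would also control minimum hole spacing.
    """
    rng = np.random.default_rng(seed)
    return rng.random((height, width)) < hole_fraction

mask = make_random_hole_mask(640, 480)
print(mask.shape)   # (480, 640)
print(mask.mean())  # roughly 0.1: about 10% of barrier cells are holes
```

With the viewer's tracked eye positions, each hole exposes a different underlying display pixel to each eye, which is what makes per-viewer stereo possible without glasses.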

Their key contribution is a hardware-accelerated rendering algorithm that minimizes noise and optimizes quality. The algorithm consists of four passes. In the first pass, the scene is rendered from each viewer's perspective into a frame buffer. The second pass generates the point images by calculating rays from each viewpoint through the holes onto the display. In the third pass, the point images of all views are blended into a single texture. In the fourth pass, the color error of each pixel of each view is diffused to the neighboring visible pixels in the same view.
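The blending and error-diffusion passes can be sketched in NumPy. This is a CPU illustration under stated assumptions: the data layout, the simple averaging rule for overlapping views, and the single-neighbor diffusion kernel are mine, not the paper's GPU implementation.

```python
import numpy as np

def blend_views(view_images, visibility):
    """Pass-3 sketch: average the point images of all views into one buffer.

    view_images: list of HxWx3 float arrays (pass-1 renders, one per viewer).
    visibility:  list of HxW bool masks (pass-2 ray casts through the holes).
    """
    h, w, _ = view_images[0].shape
    acc = np.zeros((h, w, 3))
    counts = np.zeros((h, w, 1))
    for img, vis in zip(view_images, visibility):
        acc[vis] += img[vis]
        counts[vis] += 1
    seen = counts[..., 0] > 0
    acc[seen] /= counts[seen]
    return acc

def diffuse_error(view_img, blended, vis, weight=0.5):
    """Pass-4 sketch: push a fraction of each visible pixel's color error onto
    its right-hand neighbor when that neighbor is visible in the same view.
    (The real diffusion kernel and weights are assumptions here.)"""
    out = blended.copy()
    err = view_img - blended
    both = vis[:, :-1] & vis[:, 1:]  # pixel and right neighbor both visible
    out[:, 1:][both] += weight * err[:, :-1][both]
    return out
```

Where only one view sees a pixel, blending leaves that view's color untouched; where views overlap, averaging introduces the per-view color error that pass 4 then spreads to neighboring visible pixels of the same view.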

They developed several applications, such as a 3D reconstruction of a city and a room-design application, that showed good results. However, the system has certain drawbacks as well, such as degradation of image quality as viewers are added. It also seems quite expensive for commercial applications.

