Week 7 Summary

SCAPE: Stereoscopic Collaboration in Augmented and Projective Environments.

The paper introduces SCAPE, a collaborative infrastructure for augmented and projective environments. SCAPE aims to create a virtual environment that multiple people can access at the same time. Previously, it was difficult to provide each individual with a correct personal perspective. SCAPE focuses on giving individuals an environment where they not only see the scene from their own viewpoint but also retain face-to-face interaction, which facilitates collaborative work. SCAPE makes use of head-mounted projective displays (HMPDs), which can be seen as a combination of HMDs and projectors. HMPD technology enables the real world to be enhanced with 3D computer-generated information. It also provides the capability to create an arbitrary number of individual viewpoints with undistorted perspectives while retaining face-to-face communication. The authors then discuss the environment they created in great detail before moving on to the design and implementation, presenting both the hardware and software sides. They wrap up by describing Aztec Explorer, an application that demonstrates SCAPE.
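The "arbitrary number of individual viewpoints" idea boils down to building a separate view transform from each tracked user's head pose. A minimal sketch in Python/NumPy, assuming hypothetical tracked head positions and a shared virtual scene centered at the origin (the names and coordinates here are illustrative, not from the paper):

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a right-handed view matrix for one tracked user's eye."""
    f = target - eye
    f = f / np.linalg.norm(f)          # forward direction
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)          # right direction
    u = np.cross(s, f)                 # recomputed up
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f
    m[:3, 3] = -m[:3, :3] @ eye        # translate eye to the origin
    return m

# Hypothetical tracked head positions for three collaborators,
# all looking at a shared virtual object at the origin.
users = {
    "alice": np.array([0.0, 1.6, 2.0]),
    "bob":   np.array([1.5, 1.7, 1.0]),
    "carol": np.array([-1.0, 1.5, 1.5]),
}
views = {name: look_at(eye, np.zeros(3)) for name, eye in users.items()}
```

Each matrix maps its user's eye to the origin of that user's camera space, so every collaborator is rendered a geometrically correct, undistorted view of the same shared scene.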

 

A Practical Multi-viewer Tabletop Autostereoscopic Display:

In this paper, the authors present a multi-user autostereoscopic tabletop display. The system is based on the "Random Hole Display" design, which changes the pattern of openings in a barrier mounted in front of a flat-panel display from thin slits to a dense pattern of tiny, pseudo-randomly placed holes. This design lets each user see a different set of pixels through the random holes in the screen. One problem such systems face is that a large number of pixels are visible to more than one user. The authors solve this by detecting the portion of each pixel visible from each viewpoint and assigning that pixel a color for every viewpoint accordingly, using a highly accurate tracking system together with their rendering algorithm. The rendering algorithm consists of four passes. In the first pass, the scene is rendered from each viewer's perspective into a frame buffer. The second pass generates point images by computing rays from each viewpoint through the holes and onto the display. The third pass blends the point images of all views into a single texture. In the fourth pass, the color error of each pixel of each view is diffused to the neighboring visible pixels in the same view. The authors also explain the calibration procedure and finally close by proposing applications and presenting their evaluations of those applications.
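The visibility-conflict problem described above can be illustrated with a toy model. The sketch below, in Python/NumPy, reduces the display to one dimension and uses made-up hole density, parallax offsets, and per-view colors; the real system works in 2-D, computes the visible fraction of each pixel per view, and weights the blend by that fraction rather than averaging:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D toy model: a display of N pixels behind a barrier whose
# openings are pseudo-randomly placed holes (hypothetical
# simplification of the paper's 2-D random-hole pattern).
N = 64
holes = np.flatnonzero(rng.random(N) < 0.25)   # hole positions

# Each viewer's eye position shifts which display pixel is seen
# through a given hole (toy integer parallax offsets).
viewer_offsets = {"viewer_0": 0, "viewer_1": 2}

# Map holes to the set of display pixels each viewer sees.
seen = {}
for name, off in viewer_offsets.items():
    pixels = holes + off
    seen[name] = set(pixels[(pixels >= 0) & (pixels < N)].tolist())

# Conflict pixels are visible to more than one viewer.
conflicts = seen["viewer_0"] & seen["viewer_1"]

# Toy per-view colors; resolve conflicts by averaging (the paper
# instead weights by each view's visible fraction of the pixel).
colors = {"viewer_0": 0.2, "viewer_1": 0.8}
display = np.zeros(N)
for name, pix in seen.items():
    for p in pix:
        if p in conflicts:
            display[p] = np.mean(list(colors.values()))
        else:
            display[p] = colors[name]
```

Because the holes are pseudo-random rather than regular slits, conflicts are scattered rather than systematic, which is what makes the subsequent per-view error diffusion to neighboring visible pixels effective.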
