[week 7 summaries]

Scape: Supporting Stereoscopic Collaboration in Augmented and Projective Environments

This paper talks about a system that supports immersive and interactive virtual reality for multiple users. The augmented environment mainly consists of two parts: the microscene on the workbench and the macroscene on the walls.

Microscene

The microscene is projected onto a workbench coated with a material called retro-reflective film. When users wear a head-mounted projective display (HMPD) and look toward surfaces coated with this film, they perceive both the surrounding real-world environment and the virtual objects, with correct occlusion cues. In the experiment, a miniature 3D model of a city is projected onto the workbench, and a magnifier is provided for a more detailed look.

Macroscene

The four-walled experiment room serves as a CAVE-like system. Because the retro-reflective material works better at small incidence angles, the corners of the room are rounded, and the film is carefully applied to each wall and corner.

Both the microscene and the macroscene are rendered with respect to each user's perspective. Every user wears a tracked HMPD, so the system knows each user's head position and orientation and computes the corresponding imagery from that pose.
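To make this concrete, here is a rough sketch of how per-user, view-dependent rendering from a tracked head pose could look. The names (`view_matrix`, `user.tracker`, `render`) are my own placeholders, not the paper's implementation.

```python
import numpy as np

def view_matrix(head_pos, head_rot):
    """Build a 4x4 view matrix from a tracked head pose.

    head_pos: (3,) head position in world space, from the tracker.
    head_rot: (3, 3) rotation matrix whose columns are the head's
              right/up/forward axes expressed in world space.
    """
    view = np.eye(4)
    view[:3, :3] = head_rot.T                # rotate world into head space
    view[:3, 3] = -head_rot.T @ head_pos     # translate world origin into head space
    return view

# Hypothetical per-frame loop: each user's HMPD shows both scenes
# rendered from that user's own tracked pose (assumed tracker API).
def render_frame(users, microscene, macroscene):
    for user in users:
        pos, rot = user.tracker.read_pose()
        v = view_matrix(pos, rot)
        microscene.render(v, target=user.display)
        macroscene.render(v, target=user.display)
```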

Functionality

In this system, a user can switch their attention between the microscene and the macroscene. I think the macroscene provides most of the immersion, while the microscene is responsible for the interaction. Several devices are used to track interaction with the miniature on the workbench. In the virtual 'tour', users can teleport directly from one site to another by manipulating their physical IDs on the workbench, and they can find other users' locations by checking those users' unique physical IDs. While touring, the surrounding walls present a walk-through view based on each user's location. The immersive room representation helps users perceive the environment in a way the microscene cannot (see the sketch below).
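Here is a toy sketch of how I imagine the ID-based teleportation could work: the tracked position of a user's physical ID on the miniature is scaled up to a location in the full-scale macroscene. The names (Marker, workbench_to_world) and the linear mapping with CITY_SIZE are my own assumptions, not details from the paper.

```python
from dataclasses import dataclass

@dataclass
class Marker:
    user_id: int
    u: float  # normalized workbench coordinates in [0, 1]
    v: float

# Assume the miniature city maps linearly onto a CITY_SIZE x CITY_SIZE
# metre region of the full-scale scene (value is illustrative only).
CITY_SIZE = 1000.0

def workbench_to_world(marker: Marker):
    """Convert a marker's position on the miniature to full-scale coordinates."""
    return (marker.u * CITY_SIZE, marker.v * CITY_SIZE)

def teleport(user, marker: Marker):
    # Moving the marker on the workbench moves the user's macroscene
    # viewpoint to the corresponding full-scale location.
    user.position = workbench_to_world(marker)
```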

Overall, this is an interesting system with room for extension. I am curious how they calibrate across different users, given how many components are involved. Also, since a user can walk forward and backward over some distance (a few metres), how do they track that distance accurately enough that the updated surroundings do not look unrealistic to the user?

 

A Practical Multi-viewer Tabletop Autostereoscopic Display

This paper discusses a tabletop display that supports both multiple viewers and autostereoscopy. Autostereoscopy refers to methods that display stereoscopic images without extra devices such as headgear or glasses. One way to achieve it is to use barriers that block different parts of the image from the left and right eye, so that the two eyes perceive slightly different images and the stereoscopic effect emerges.

The display in this paper uses a film with pseudo-randomly placed holes as the barrier.

In the single-viewer case, from a specific viewing position a certain area of pixels is visible through each hole. The position and shape of the projection through the hole change as the viewpoint moves, so the projected area is not always aligned with the pixel grid. The algorithm renders every pixel that falls into the projected area, even partially, with the same color.
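To make the geometry concrete, here is a small sketch of how the footprint of one hole on the pixel plane could be computed for a given eye position. The setup and names are my assumptions (pixel plane at z = 0, barrier at z = gap, footprint approximated as a circle); the paper's actual implementation may differ.

```python
import numpy as np

def hole_footprint(eye, hole, r, gap):
    """Project a circular barrier hole onto the pixel plane as seen from `eye`.

    eye, hole: (x, y, z) positions; pixel plane is z = 0, hole plane is z = gap.
    r: hole radius. Returns (center, radius) of the approximate footprint.
    """
    eye, hole = np.asarray(eye, float), np.asarray(hole, float)
    s = eye[2] / (eye[2] - gap)                 # projection scale, hole plane -> pixel plane
    center = eye[:2] + s * (hole[:2] - eye[:2])
    return center, s * r

def pixels_in_footprint(center, radius, pitch):
    """Integer indices of pixel cells the footprint touches, even partially.

    pitch: pixel size. All of these pixels get the same color in the
    rendering step described above.
    """
    lo = np.floor((center - radius) / pitch).astype(int)
    hi = np.floor((center + radius) / pitch).astype(int)
    hits = []
    for ix in range(lo[0], hi[0] + 1):
        for iy in range(lo[1], hi[1] + 1):
            # Closest point of the pixel cell to the footprint center.
            nearest = np.clip(center, [ix * pitch, iy * pitch],
                              [(ix + 1) * pitch, (iy + 1) * pitch])
            if np.linalg.norm(nearest - center) <= radius:
                hits.append((ix, iy))
    return hits
```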

When multiple viewers are present, the pixels of the image can be classified into two categories: pixels visible to only one viewer and pixels visible to more than one viewer. The pixels in the second category need special care. The authors' idea is: for a pixel visible to multiple viewers, say pixel A, render it with a reconciled color; then, for each viewer who can see A, say viewer 1, diffuse and blend A's residual error into the visible pixels near A from viewer 1's perspective (not the pixels geometrically adjacent to A). This is aimed at smoothing the result, because the reconciled color of A is somewhat 'discrepant' from viewer 1's perspective.
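Here is a rough sketch of how I understand the reconcile-then-diffuse idea. The simple averaging and the uniform diffusion weights are my own guesses; the paper's actual reconciliation and blending are more involved.

```python
import numpy as np

def reconcile_and_diffuse(desired, neighbors):
    """
    desired:   dict (viewer, pixel) -> desired RGB color (np.array of 3 floats).
    neighbors: func (viewer, pixel) -> list of that viewer's visible pixels near `pixel`.
    Returns a dict pixel -> final RGB.
    """
    # Group the desired colors for each physical pixel across viewers.
    by_pixel = {}
    for (viewer, pixel), color in desired.items():
        by_pixel.setdefault(pixel, []).append((viewer, color))

    # 1. Reconcile: a shared pixel gets the mean of its conflicting colors.
    out = {pixel: np.mean([c for _, c in entries], axis=0)
           for pixel, entries in by_pixel.items()}

    # 2. Diffuse: push each viewer's residual error into nearby pixels
    #    that only that viewer sees, to hide the discrepancy.
    for pixel, entries in by_pixel.items():
        if len(entries) < 2:
            continue
        for viewer, color in entries:
            error = color - out[pixel]
            targets = [p for p in neighbors(viewer, pixel)
                       if p in out and len(by_pixel[p]) == 1]
            for p in targets:
                out[p] = out[p] + error / len(targets)  # assumed uniform weights
    return out
```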

The associated GPU implementation and calibration method are also described in the paper.

The results show that the tabletop display works well for multiple viewers with independent viewpoints. Also, using a pre-defined single-view image as the reference, the evaluation indicates that the new method comes closer to a single-view-like result for each viewer, with less noise and crosstalk.

I guess the black dots in the figures shown in the paper are introduced by the low display resolution. The authors mention that they will investigate higher-resolution displays in future work, which I think would resolve this black-dot problem. I am wondering, though: with more pixels present, the density of the holes must increase accordingly, and more viewers will also demand more holes to generate convincing images. Both aspects increase the number of conflicting pixels and introduce more discrepancy, yet the number of holes needs to be kept below a certain limit to maintain tolerance. I am wondering how to address this trade-off efficiently.

 

 
