Random Hole Display, SCAPE

Paper summary for week 7 by Andy

To my mind, the work described by Ye, State, and Fuchs on Random Hole Displays suggests that increasing pixel densities in display technology can mean big gains for 3DUI.

The RHD technology works by taking a high-density display and randomly sampling the visible pixels, obscuring the display with a noise mask. The open spots of the mask dictate which actual pixels will be visible from a particular angle (i.e., to a head-tracked user). Multiple users are accommodated by a multipass algorithm that smooths image information across the dynamically calculated mask. The result is analogous to a set of multiple shadows cast on a display (in reverse, if you can imagine that), so that the ‘shadow’ determines what proportion of the intended image is mixed into the display area’s color values.
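The mask-and-blend idea can be sketched in a few lines. This is a toy illustration, not the paper's algorithm: the display size, hole fraction, and the random hole-to-viewer assignment (a stand-in for the real geometric visibility computation from head tracking) are all my own placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: a tiny display and two tracked viewers.
H, W = 8, 8           # display resolution (small, for illustration)
HOLE_FRACTION = 0.25  # fraction of pixels left open by the random mask

# The random hole mask: True where the barrier has an opening.
mask = rng.random((H, W)) < HOLE_FRACTION

# Which viewer(s) see each hole. In the real system this comes from
# head-tracked geometry; here it is assigned at random.
# 0 = viewer 0 only, 1 = viewer 1 only, 2 = visible to both.
owner = rng.integers(0, 3, size=(H, W))

# Per-viewer target images (solid colors as placeholders).
img0 = np.full((H, W, 3), [1.0, 0.0, 0.0])  # viewer 0 should see red
img1 = np.full((H, W, 3), [0.0, 0.0, 1.0])  # viewer 1 should see blue

# Compose the panel: unconflicted holes show one viewer's color;
# conflicted holes mix the two intended colors, in the spirit of the
# paper's multipass smoothing over the mask.
panel = np.zeros((H, W, 3))
panel[mask & (owner == 0)] = img0[mask & (owner == 0)]
panel[mask & (owner == 1)] = img1[mask & (owner == 1)]
both = mask & (owner == 2)
panel[both] = 0.5 * (img0[both] + img1[both])
```

The real system does this per pixel against tracked eye positions, and spreads the conflict cost spatially via the random mask so no viewer sees structured artifacts.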

Presenting two angles of the image to two separate viewers is approximately the same problem (differing in a few parameters) as presenting two angles to a single viewer, one for each eye; in this way the RHD technology is capable of producing stereo viewpoints from a single viewing position.

The paper addresses the tuning parameters and assumptions that make it possible for four users to simultaneously view a stereoscopic tabletop display.

In the SCAPE paper, we see another example of how AR technology has evolved to support multiple users working simultaneously. In this case, the technology of optical materials makes creating a personalized field of view possible for several people at once in the same scene.

The trick to this system comes out of an analysis of how collaborators actually work. Each person holds their own tools and widgets, and one or more people occupy a given workspace, either at a workbench or in a CAVE-like room.

The technology used here is an optical film capable of reflecting incident light directly back along the incident angle. Placing light projectors in the HMD itself means that the wearer's point of view is built directly into the projection system; surfaces can then be created that display AR imagery that is always in the field of view of the wearer.
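The distinction between this film and an ordinary mirror is easy to state in vector terms. A minimal sketch (the specific ray and normal vectors are arbitrary examples): a mirror reflects an incident ray about the surface normal, while an ideal retroreflector sends the ray straight back toward its source regardless of the surface orientation, which is why each wearer sees only their own projector's image.

```python
import numpy as np

# Example incident ray direction (unit vector) from an HMD projector
# toward a surface point.
incident = np.array([0.6, -0.8, 0.0])

# An ordinary mirror reflects about the surface normal:
normal = np.array([0.0, 1.0, 0.0])
mirror_out = incident - 2 * np.dot(incident, normal) * normal

# An ideal retroreflector returns the light along the incoming path,
# independent of the surface normal:
retro_out = -incident
```

Because the return path depends only on the source, not the surface, any wall or prop coated in the film works as a personal screen for whoever is projecting onto it.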

By exploiting an environment of retroreflective surfaces, head tracking, and generalizable widgets, the authors are able to implement several important features for collaboration: shared object spaces, permission based visibility, direct manipulation techniques, and a common virtual work area that can be projected onto locally distant environments with high frame rates and selective resolution of visibility and detail.

Retroreflective materials work well on walls, work surfaces, and handheld objects, making general activities equally well supported across a variety of surfaces.
