Moving Objects in Space: Exploiting Proprioception In Virtual-Environment Interaction

Mine, Brooks, and Séquin

This paper describes a set of techniques for 3D user interaction (3DUI) in immersive virtual environments that leverage our innate proprioceptive sense: the human ability to sense the relative positions of our body parts, the same sense that lets us grasp our own hands without looking. The techniques fall into three categories, Direct Manipulation, Physical Mnemonics, and Gestural Actions, all of which are body-relative movement modes that map directly onto interaction controls.

Exploiting the proprioceptive sense can overcome several major difficulties of immersive VEs, namely the lack of precision and the lack of a unifying framework. By making the body itself the interaction framework and the toolkit, and by harnessing its built-in haptic feedback, motor precision, and variety of input gestures, proprioceptive interfaces offer advantages that traditional UI modes lack.

On-body techniques provide a physical, real-world frame of reference for motion; the computer system simply supplies a remapping function that extracts additional information from limb positioning. This is the functional description of scaled-world grab: the extension of the bent arm gives the VE user the ability to scale the size of the world with a simple gesture. To compensate for the magnification of the user's hand in this mode, the hand is drawn as a crosshair, which is not as obviously deformed by enlargement.
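To make that remapping concrete, here is a minimal sketch (my own illustration, not code from the paper) of the scale-about-the-eye idea behind scaled-world grab: at the moment of the grab, the world is uniformly scaled about the eye point so the selected object lands within arm's reach.

```python
import numpy as np

def scaled_world_grab(eye, hand, obj):
    """At grab time, compute a uniform scale about the eye that brings the
    grabbed object to the hand's distance. Because the scaling is centered
    on the eye, the projected image doesn't visibly jump."""
    s = np.linalg.norm(hand - eye) / np.linalg.norm(obj - eye)
    remap = lambda p: eye + s * (p - eye)  # apply to every world-space point
    return s, remap

# Example: an object 10 m away comes to roughly 0.54 m, i.e., arm's reach.
eye = np.array([0.0, 1.7, 0.0])
hand = np.array([0.0, 1.5, -0.5])
obj = np.array([0.0, 1.7, -10.0])
s, remap = scaled_world_grab(eye, hand, obj)
print(s, remap(obj))
```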

Physical mnemonics is the storing of objects and commands relative to the user's body. In this category we see pull-down menus, hand-held widgets, and field-of-view-relative mode switching. This type of interaction exploits the space we 'carry' with us around our bodies: it sets the context for certain movements and gestures and gives us a place to put interface elements. It also works like a toolbelt; we carry placeholder objects with us and can pull them out as needed for interactions.
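As a sketch of what body-relative storage could look like in practice (the function name and the hip-slot placement are my own assumptions, not the paper's): a widget stored at a fixed offset in the torso's coordinate frame follows the user around like a toolbelt slot.

```python
import numpy as np

def widget_world_position(torso_pos, torso_yaw, offset_in_body):
    """Place a widget at a fixed offset in the user's body frame (e.g., a
    toolbelt slot at the left hip) so it moves and turns with the user."""
    c, s = np.cos(torso_yaw), np.sin(torso_yaw)
    rot_y = np.array([[c, 0.0, s],
                      [0.0, 1.0, 0.0],
                      [-s, 0.0, c]])  # rotation about the vertical (y) axis
    return np.asarray(torso_pos) + rot_y @ np.asarray(offset_in_body)

# Hypothetical slot 30 cm to the user's left at hip height (y is up).
print(widget_world_position([2.0, 0.0, 5.0], np.pi / 2, [-0.3, 0.9, 0.0]))
```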

Gestural actions are body-relative movements that we can use to trigger useful actions. Head-butt zoom is like leaning in toward and away from something, with that motion rescaled dynamically to redraw the world closer to or farther from our eyes. Head position becomes an additional input channel, and one that maps directly onto our natural habit of shifting the head forward to inspect an object and back to draw away from it. Other gestural actions include look-at menus, two-handed flying, and over-the-shoulder deletion. Each of these takes head position, hand position, or a familiar gesture and uses it to accomplish the same task in the virtual world as it would in the real world. (Maybe not flying, but hey?)
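As I understand it, head-butt zoom switches views when the head crosses an invisible trigger plane in front of the user; a toy version of that crossing test might look like the following (the plane placement is my assumption, and a real implementation would add hysteresis so the view doesn't flicker at the boundary).

```python
import numpy as np

def head_past_plane(head_pos, plane_point, plane_normal):
    """Return True when the head has leaned past the trigger plane
    (the positive side of the normal), i.e., switch to the zoomed view."""
    n = np.asarray(plane_normal) / np.linalg.norm(plane_normal)
    return float(np.dot(np.asarray(head_pos) - np.asarray(plane_point), n)) > 0.0

# A plane 15 cm in front of the resting head position (-z is "forward").
rest = np.array([0.0, 1.7, 0.0])
plane_point = rest + np.array([0.0, 0.0, -0.15])
normal = np.array([0.0, 0.0, -1.0])  # points forward, away from the user
print(head_past_plane(rest, plane_point, normal))                     # False
print(head_past_plane(rest + [0.0, 0.0, -0.2], plane_point, normal))  # True
```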

OmniTouch: Wearable Multitouch Interaction Everywhere

Harrison, Benko, and Wilson

Building on this week's theme of on-body interaction, OmniTouch is a wearable projector and depth-ranging sensor, worn on the shoulder, that can create GUI elements on any surface. It can take advantage of any blank surface in the world as an interaction surface, and it requires no instrumentation of the environment to create ad hoc, on-demand control surfaces.

It pairs a PrimeSense depth camera with a pico projector. The major differences from other attempts at automatic interaction surfaces and body-worn systems include automatic tracking of the interaction surface: users don't have to hold their hand or arm in a particular position in order to use the menus. The OmniTouch system can also detect multitouch input.
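For a rough sense of how a depth camera can tell hovering from clicking, here is a deliberately simplified sketch; the actual OmniTouch pipeline uses a flood fill from the fingertip to decide whether the finger has merged with the surface, and the 10 mm threshold below is my own guess.

```python
import numpy as np

def is_touching(depth_mm, tip_rc, surface_depth_mm, touch_threshold_mm=10.0):
    """Treat a fingertip as 'clicked' when its measured depth is within a
    small threshold of the surface depth directly behind it."""
    r, c = tip_rc
    gap = surface_depth_mm[r, c] - depth_mm[r, c]  # finger height above surface
    return 0.0 <= gap < touch_threshold_mm

# Toy example: the surface is 600 mm away; the fingertip hovers 5 mm above it.
depth = np.array([[595.0]])
surface = np.array([[600.0]])
print(is_touching(depth, (0, 0), surface))  # True: within the 10 mm threshold
```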

This system could be ideal for creating real-world interaction surfaces that coordinate with virtual environments. I think the subtle advantage here is that registration is a self-contained problem: the system handles its own need for an operating surface. With see-through displays, it can function without being involved in the AR system as a whole.

Another cool offshoot idea is that only flat-ish surfaces are interesting or important. Instead of trying to instantiate full 3D mixed reality, this technique shows that we can get a lot of use out of any and all available 2D surfaces to support the desired mixed-reality interactions.
