[Summaries Week 13] Exploiting Proprioception, OmniTouch

Moving Objects In Space: Exploiting Proprioception In Virtual-Environment Interaction

This paper aims to mitigate the difficulty inherent in virtual-environment (VE) interaction through the ingenious use of one’s awareness of one’s own body (proprioception) as an aid in orientation and spatial-memory tasks. The authors lay out the problem that motivates their study: precise manipulation of virtual objects is hard, which is why most VR applications provide only spatial visualization and possess rudimentary, if any, spatial interaction capabilities. They state that, technical considerations aside, there are challenging human-perception issues that must be accounted for to allow efficient spatial interaction in VEs, prominent among them the lack of haptic feedback and the low number of degrees of freedom (typically six) afforded by input devices.

The authors then put forth proprioception as a potential enabler of such interactions. They state that body-relative techniques provide a frame of reference and finer control and thus permit effortless “eyes-off” interaction. They classify their proposed techniques by the information they exploit and the mode of interaction: direct manipulation, physical mnemonics, and gestural actions. Following this, the authors present an automatic scaling method that allows proprioceptive information to be exploited by bringing interactions within arm’s reach of the user.
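To make the automatic-scaling idea concrete, here is a minimal sketch, assuming the world is scaled uniformly about the user’s eye so that the object of interest lands at the user’s physical hand distance; the function names and the eye-pivot choice are illustrative assumptions rather than the paper’s implementation.

```python
import numpy as np

def scale_world_to_arms_reach(eye_pos, hand_pos, target_pos):
    """Return a scale factor and pivot that bring target_pos to the user's
    physical hand distance, scaling the world about the eye.

    All positions are 3-vectors in world coordinates. The names and the
    choice of the eye as the pivot are assumptions for illustration.
    """
    hand_dist = np.linalg.norm(hand_pos - eye_pos)
    target_dist = np.linalg.norm(target_pos - eye_pos)
    scale = hand_dist / target_dist          # < 1 shrinks a distant world
    return scale, eye_pos

def apply_uniform_scale(point, scale, pivot):
    """Scale a world-space point uniformly about a pivot."""
    return pivot + scale * (point - pivot)

# Example: a virtual object 10 m away is brought to roughly arm's reach.
eye = np.array([0.0, 1.7, 0.0])
hand = np.array([0.0, 1.4, -0.6])
obj = np.array([0.0, 1.5, -10.0])
s, pivot = scale_world_to_arms_reach(eye, hand, obj)
print(apply_uniform_scale(obj, s, pivot))   # now ~0.67 m from the eye
```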

Subsequently, the authors present a number of techniques under the three categories mentioned above. Among these, scaled-world grab allows for direct manipulation as well as locomotion in VEs by scaling the world on each object grab and release. The authors also describe pull-down menus, which leverage physical mnemonics and keep the interface clutter-free by hiding virtual menus at fixed locations relative to the user’s body, and hand-held widgets, which are attached to the user’s hands rather than to the objects on which the actions are to be performed. Finally, the authors describe a number of gestural actions, including the innovatively named head-butt zoom, which lets the user zoom in and out of a view based on head motion.
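As a rough illustration of how a gestural action such as head-butt zoom might be triggered, the sketch below toggles a zoom state when the tracked head crosses an invisible plane placed a short distance in front of the user; the plane offset, hysteresis value, and class name are assumptions for illustration, not the paper’s parameters.

```python
import numpy as np

class HeadButtZoom:
    """Minimal sketch of a head-butt-zoom style trigger.

    A trigger plane sits a fixed distance in front of the user's starting
    head position; leaning the head through it zooms in, leaning back
    zooms out. The offset and hysteresis values are assumptions.
    """
    def __init__(self, start_head_pos, forward_dir, offset=0.15, hysteresis=0.03):
        self.normal = forward_dir / np.linalg.norm(forward_dir)
        self.plane_point = start_head_pos + offset * self.normal
        self.hysteresis = hysteresis
        self.zoomed_in = False

    def update(self, head_pos):
        # Signed distance of the head from the trigger plane.
        d = np.dot(head_pos - self.plane_point, self.normal)
        if not self.zoomed_in and d > self.hysteresis:
            self.zoomed_in = True      # head crossed forward: zoom in
        elif self.zoomed_in and d < -self.hysteresis:
            self.zoomed_in = False     # head pulled back: zoom out
        return self.zoomed_in
```

The small hysteresis band keeps the view from flickering between states when the head hovers near the plane, a common choice for threshold-based gesture triggers.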

To substantiate their claims, the authors also carry out two formal user studies. They conclude by presenting the results of these studies, which clearly demonstrate that manipulating objects co-located with the hand is easier than manipulating objects held at an offset from it, and that hand-held widgets leverage physical mnemonics more effectively than widgets placed on virtual objects.


OmniTouch: Wearable Multitouch Interaction Everywhere

In this paper, the authors describe their novel wearable depth-sensing and projection system, OmniTouch, which allows everyday surfaces to be appropriated for multi-touch interaction. The authors motivate their work by conjuring visions of the interactions OmniTouch enables, including interactions that combine increased visual real estate with mobility by appropriating the body itself as a touch surface.

The authors describe the hardware components of their system: a short-range PrimeSense depth camera and a Microvision ShowWX+ laser pico-projector. These two components are firmly fitted into a metal frame worn on the shoulder, which provides a good vantage point for the system. The authors also describe the vision algorithms that enable multi-touch finger tracking. They employ a family of methods that provides finger segmentation, recognizing fingers as cylindrical shapes in the depth map delivered by the PrimeSense camera, and finger-click detection, using a flood-fill algorithm that spills onto the contact surface when a finger touches it.
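The click-detection idea described above can be sketched as a depth-tolerant flood fill seeded at a detected fingertip: a hovering finger fills only a small isolated blob, while a touching finger merges with the surface and the fill grows large. The thresholds and function name below are illustrative assumptions, not OmniTouch’s actual values.

```python
from collections import deque
import numpy as np

def flood_fill_click(depth, tip, depth_tol=13.0, click_area=2000, max_pixels=5000):
    """Classify a fingertip as clicking by flood-filling the depth map.

    Starting at fingertip pixel `tip` (row, col), neighbouring pixels whose
    depth is within `depth_tol` of the seed are filled. If the filled region
    exceeds `click_area`, the finger is assumed to be touching the surface.
    All thresholds are illustrative assumptions.
    """
    h, w = depth.shape
    seed_depth = depth[tip]
    visited = np.zeros((h, w), dtype=bool)
    queue = deque([tip])
    visited[tip] = True
    filled = 0
    while queue and filled < max_pixels:
        r, c = queue.popleft()
        filled += 1
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not visited[nr, nc]:
                if abs(depth[nr, nc] - seed_depth) <= depth_tol:
                    visited[nr, nc] = True
                    queue.append((nr, nc))
    return filled >= click_area
```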

The authors describe how their system can be leveraged to create on-demand projected interfaces that permit much richer visual interactions. Their work supports surface segmentation and tracking through the use of lock points. OmniTouch allows interactive areas to be summoned and defined relative to a surface lock point, interfaces to be created based on the surface type, and interactive areas to be placed at user-specified locations.
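A minimal sketch of anchoring a projected interface to a tracked lock point might look like the following, where the interface’s corners are recomputed each frame from the lock point and the surface’s estimated normal and up direction; all names and sizes are assumptions for illustration, not OmniTouch’s implementation.

```python
import numpy as np

def interface_corners_world(lock_point, surface_normal, surface_up, size=(0.10, 0.06)):
    """Place a rectangular projected interface relative to a tracked lock point.

    lock_point is the 3-D anchor on the surface; surface_normal and surface_up
    come from the surface tracker; size is the interface width and height in
    metres. Re-evaluating this each frame keeps the interface glued to the
    surface as it moves.
    """
    n = surface_normal / np.linalg.norm(surface_normal)
    up = surface_up - np.dot(surface_up, n) * n       # project "up" onto the surface
    up /= np.linalg.norm(up)
    right = np.cross(up, n)
    half_w, half_h = size[0] / 2.0, size[1] / 2.0
    return [lock_point + sx * half_w * right + sy * half_h * up
            for sx, sy in ((-1, -1), (1, -1), (1, 1), (-1, 1))]
```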

The authors present a number of example applications that leverage the capabilities provided by OmniTouch, including a phone-keypad application and a map panning-and-zooming demo that shows off OmniTouch’s multi-touch capabilities. In conclusion, the authors also present the results of a user study conducted to gauge the efficacy of the interactions OmniTouch enables. The study revealed a systematic offset between OmniTouch’s readings and where users perceived their clicks to land, which the authors compensate for with a simple offset correction. Otherwise, the results of the user study are impressive, with the system achieving a 96.5% success rate in detecting finger clicks, demonstrating its viability.
