Week 13 Summaries

OmniTouch: Wearable Multitouch Interaction Everywhere

This paper by Harrison et al. is a more recent, robust (and real) implementation of the SixthSense concept by Pranav Mistry et al. The group implemented the system using depth-sensing technology similar to the Kinect. They also discuss prior work such as SixthSense and Interactive Dirt, and explain how their technology differs (and is more advanced). OmniTouch is a novel wearable, shoulder-mounted system that enables interactive, multi-touch input on everyday objects. It can project interfaces onto walls, desks, handheld objects like books and notepads, and even the user's own body, such as the palm or lap. They note that prior studies have shown buttons should be about 2.3 cm in diameter for reliable touch input, which is very much achievable on a person's palm. Thus they envision a day when all interactions possible on a smartphone could be performed on a person's palm.

The OmniTouch system allows various touch interactions, such as tapping and dragging, as well as mouse-like interactions such as "click" and "hover". Previous work in the field could not detect hover and required the person to wear colored or IR-reflective markers on their fingers. OmniTouch overcomes these limitations because it relies on depth sensing. For the hardware, the team initially prototyped with a Kinect, but its minimum sensing distance of 50 cm made it awkward to use, since this application works with objects much closer to the person. They instead used a custom PrimeSense depth camera, which enables depth sensing at distances as low as 20 cm, combined with a Microvision ShowWX+ laser pico-projector.
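To make the click/hover distinction concrete, here is a rough Python sketch of the underlying idea: compare the fingertip's depth reading with the depth of the surface directly beneath it. The thresholds and function names are illustrative assumptions on my part, not values or code from the paper.

CLICK_THRESHOLD_CM = 1.0   # assumed: fingertip essentially touching the surface
HOVER_THRESHOLD_CM = 5.0   # assumed: fingertip floating just above the surface

def classify_finger_state(finger_depth_cm, surface_depth_cm):
    """Return 'click', 'hover', or 'none' from two depth readings (in cm)."""
    gap = surface_depth_cm - finger_depth_cm   # the finger sits nearer the camera
    if gap < 0:
        return "none"                          # noisy reading: finger appears behind the surface
    if gap <= CLICK_THRESHOLD_CM:
        return "click"
    if gap <= HOVER_THRESHOLD_CM:
        return "hover"
    return "none"

# Example: a fingertip 2 cm above a table that is 45 cm from the camera
print(classify_finger_state(43.0, 45.0))   # -> hover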

OmniTouch also supports multi-finger tracking. Another interesting capability described in the paper is multi-surface interaction, which enables a whole new suite of ideas: they built a prototype painting application projected on a wall, where the user uses the back of their hand as a color palette. The hardware is still too large and cumbersome for real-world use; however, a miniature implementation on head-mounted glasses, or a Google Glass-like system, may not be far away.
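To give a flavour of how multi-surface interaction might be organized in software, the Python sketch below routes each touch event to whichever tracked surface it lands on, so the back of the hand can serve as a palette while the wall serves as a canvas. The surface representation and handler names are illustrative assumptions, not OmniTouch's actual architecture.

# Each tracked surface owns a handler; a touch is dispatched to the surface it lands on.
surfaces = {
    "wall":      {"bounds": ((0.0, 0.0), (2.0, 1.5)),
                  "on_touch": lambda p: print("paint at", p)},
    "hand_back": {"bounds": ((0.0, 0.0), (0.08, 0.08)),
                  "on_touch": lambda p: print("pick colour at", p)},
}

def dispatch_touch(surface_name, point):
    """Send a touch (in that surface's local 2D coordinates) to its handler."""
    (x0, y0), (x1, y1) = surfaces[surface_name]["bounds"]
    if x0 <= point[0] <= x1 and y0 <= point[1] <= y1:
        surfaces[surface_name]["on_touch"](point)

dispatch_touch("hand_back", (0.04, 0.02))   # -> pick colour at (0.04, 0.02)
dispatch_touch("wall", (1.2, 0.8))          # -> paint at (1.2, 0.8)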

 

Moving Objects in Space: Exploiting Proprioception in Virtual-Environment Interaction

This paper proposes a new framework for orienting a person in a virtual environment based on proprioception, a person's sense of the position and orientation of their body and limbs. The authors describe in detail three forms of body-relative interaction: direct manipulation, physical mnemonics, and gestural actions.

They found that techniques based on proprioception are easier to use than techniques relying on visual cues alone. They developed an automatic scaling mechanism that lets users interact instantly with objects at any distance as though those objects were within arm's reach. Another technique used head orientation to determine which part of the world the user was currently looking at.
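As a rough illustration of the automatic scaling idea, the Python sketch below scales the world uniformly about the user's head so that a selected object lands within arm's reach. The vector representation, the 0.6 m reach value, and the function names are my own assumptions for illustration, not the paper's implementation.

import numpy as np

ARM_REACH_M = 0.6   # assumed comfortable arm's reach, in metres

def scale_world_to_reach(object_pos, head_pos, arm_reach=ARM_REACH_M):
    """Return a scale factor and a mapping that brings the selected object
    within arm's reach while keeping the user's viewpoint fixed."""
    distance = np.linalg.norm(object_pos - head_pos)
    scale = min(1.0, arm_reach / distance)   # only shrink the world, never enlarge it

    def transform(point):
        # Uniform scaling about the head leaves the head position unchanged.
        return head_pos + scale * (point - head_pos)

    return scale, transform

# Example: an object 6 m away ends up 0.6 m from the user's head.
head = np.array([0.0, 1.7, 0.0])
obj = np.array([0.0, 1.7, 6.0])
scale, transform = scale_world_to_reach(obj, head)
print(scale, transform(obj))   # -> 0.1 and [0.  1.7  0.6]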

Direct manipulation: The idea is that if a virtual object is located close to the user's hand, the user has a good sense of the object's position (even with eyes closed) and thus a greater sense of control.
Physical mnemonics: Users can store virtual objects, in particular menus and widgets, relative to their body. If controls are fixed relative to the user's body, the user can use proprioception to find them, just as one finds a pen in one's pocket. Other possibilities include placing controls on the user's body itself, or placing them out of view behind the user's back (a minimal sketch of body-relative storage follows this list).
Gestural actions: Just as a user’s body sense can be used to facilitate the recall of objects, it can be used to facilitate the recall of actions, such as gestures used to invoke commands or to communicate information.
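As a minimal illustration of the physical mnemonics idea, the Python sketch below stores a widget's position in the user's body frame and recovers its world position from the current tracked body pose, so the widget follows the user as they move and turn. The yaw-only body model and all names here are simplifying assumptions, not code from the paper.

import numpy as np

def yaw_rotation(yaw):
    """Rotation matrix for the body turning about the vertical (y) axis."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def store_body_relative(widget_world, body_pos, body_yaw):
    """Convert a widget's world position into the user's body frame."""
    return yaw_rotation(-body_yaw) @ (widget_world - body_pos)

def recall_world_position(widget_body, body_pos, body_yaw):
    """Recover the widget's world position from the current body pose."""
    return body_pos + yaw_rotation(body_yaw) @ widget_body

# Example: a menu stowed at a fixed offset from the torso stays attached to
# the user after they walk forward and turn around.
menu_body = store_body_relative(np.array([0.0, 1.2, -0.3]),
                                np.array([0.0, 0.0, 0.0]), 0.0)
print(recall_world_position(menu_body, np.array([2.0, 0.0, 5.0]), np.pi))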
