Week 13 Summaries

Moving Objects In Space: Exploiting Proprioception In Virtual-Environment Interaction

This paper describes manipulation techniques for virtual environments that exploit proprioception, a person's sense of the position and orientation of their own body and limbs. Three forms of body-relative interaction are presented: direct manipulation, physical mnemonics and gestural actions.

In general, manipulation in virtual worlds is difficult due to the lack of precision, the lack of haptic feedback, limited input information, and limited and awkward means of interaction. The paper proposes alternatives to address these problems.

The paper describes a method of reaching very distant objects by changing the world's scaling factor when the user grabs or releases an object. Physical mnemonics store and recall information at locations fixed relative to the user's body; for example, something like a pull-down menu can hide virtual menus just above the user's current field of view. Hand-held widgets control distant objects much like a remote control.
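The scaled-world grab idea can be sketched as follows: scale the whole world about the user's eye point so the grabbed object lands in the hand, which leaves the scene visually unchanged from the eye's viewpoint. This is a minimal illustration with made-up function names, not the paper's actual implementation.

```python
import numpy as np

def grab_scale_factor(eye, hand, obj):
    """Scale factor that brings the grabbed object into the user's hand.

    Ratio of hand distance to object distance, both measured from the eye.
    Scaling the world about the eye by this factor moves the object to the
    hand while the view from the eye stays the same.
    """
    return np.linalg.norm(hand - eye) / np.linalg.norm(obj - eye)

def scale_world_about_eye(point, eye, factor):
    # Uniformly scale a world point toward (or away from) the eye.
    return eye + factor * (point - eye)

# Example: an object 10 m away is pulled into a hand 1 m away.
eye  = np.array([0.0, 0.0, 0.0])
hand = np.array([0.0, 0.0, 1.0])
obj  = np.array([0.0, 0.0, 10.0])
f = grab_scale_factor(eye, hand, obj)            # 0.1
scaled_obj = scale_world_about_eye(obj, eye, f)  # lands at the hand
```

Because the scaling is centered on the eye, the user perceives no visual jump at the moment of the grab; on release, the inverse factor restores the world.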

The paper describes how intuitive gestures can be exploited in powerful ways. Gestural actions issue commands through body-relative motions. Head-butt zoom, look-at menus, two-handed flying and over-the-shoulder deletion are all examples of how convenient gestures can be in a virtual environment.
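Two-handed flying, for instance, derives a travel velocity from the vector between the user's hands. The sketch below is one plausible mapping, with the gain and dead-zone values chosen purely for illustration:

```python
import numpy as np

def two_handed_flying_velocity(left_hand, right_hand, gain=1.0, dead_zone_m=0.1):
    """Velocity from the vector between the two hands (a sketch).

    Direction follows the hand-to-hand vector; speed grows with hand
    separation. Bringing the hands together (inside the dead zone) stops
    motion. Gain and dead-zone values are illustrative assumptions.
    """
    v = right_hand - left_hand
    separation = np.linalg.norm(v)
    if separation < dead_zone_m:
        return np.zeros(3)                       # hands together: hover
    direction = v / separation
    speed = gain * (separation - dead_zone_m)    # wider apart = faster
    return speed * direction
```

The appeal of such body-relative mappings is that stopping is trivial and proprioceptively obvious: just bring the hands together.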

OmniTouch: Wearable Multitouch Interaction Everywhere

 OmniTouch is a wearable depth-sensing and projection system that enables interactive multitouch applications on everyday surfaces.

The system employs a depth-sensing camera, similar to the Microsoft Kinect, to track the user's fingers on everyday surfaces. Users can tap and drag their fingers just as they would on the touchscreen of a smartphone or tablet computer.
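The core of depth-based touch sensing is deciding when a tracked fingertip has actually contacted the surface behind it: the fingertip's depth reading merges with the surface's when they touch. A minimal sketch, with threshold values that are illustrative assumptions rather than the paper's calibrated numbers:

```python
def classify_finger(finger_depth_mm, surface_depth_mm,
                    touch_threshold_mm=10.0, hover_threshold_mm=40.0):
    """Classify a tracked fingertip against the surface behind it.

    The finger is nearer the camera than the surface, so the gap between
    the two depth readings shrinks toward zero as the finger lands.
    Threshold values here are illustrative assumptions.
    """
    gap = surface_depth_mm - finger_depth_mm
    if gap < touch_threshold_mm:
        return "touch"   # depths have merged: register a tap/drag
    if gap < hover_threshold_mm:
        return "hover"   # finger tracked but not yet in contact
    return "away"
```

Tracking this classification over successive frames turns a "hover to touch" transition into a tap event and a sustained "touch" with lateral motion into a drag.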

The OmniTouch device includes a short-range depth camera and a laser pico-projector, and is mounted on the user's shoulder. The projector can superimpose keyboards, keypads and other controls onto any surface, automatically adjusting for the surface's shape and orientation to minimize distortion of the projected images.
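Correcting for a tilted surface is commonly done with a homography: given where the four corners of the interface should appear on the surface, solve for the 3x3 projective map and pre-warp the projected image with its inverse. The direct-linear-transform sketch below is a generic illustration of that idea, not OmniTouch's actual rendering pipeline:

```python
import numpy as np

def homography(src, dst):
    """3x3 homography mapping 4 src points to 4 dst points (DLT sketch)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null vector of A (last row of V^T from the SVD).
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, p):
    # Apply the homography in homogeneous coordinates.
    x, y, w = H @ np.array([p[0], p[1], 1.0])
    return np.array([x / w, y / w])
```

With the surface's corner positions recovered from the depth camera, warping the interface through the inverse homography makes it appear rectangular and undistorted to the user despite the oblique projection angle.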

The palm of the hand could be used as a phone keypad, or as a tablet for jotting down brief notes. Maps projected onto a wall could be panned and zoomed with the same finger motions that work with a conventional multitouch screen.

The system was user-tested across three classes of surface: the body, hand-held surfaces and fixed surfaces in the environment. The four test surfaces were the hand, the arm, a pad and a wall.

Despite the proof-of-concept being fairly successful at what it sets out to achieve, several improvements remain to be worked on. These include making the system smaller and more compact than it currently is. Being able to project onto surfaces that are non-planar and non-rectilinear is another major concern.
