Summaries for week 13

Moving Objects In Space: Exploiting Proprioception In Virtual-Environment Interaction

One of the aims of virtual environments is to provide users with natural ways to interact with their interfaces. However, applications of such technologies remain rare despite their importance: first because virtual objects are hard to manipulate, but also because of ill-adapted interaction metaphors.

Several observations showed that proprioception is a very effective source of additional feedback. It is especially useful for direct manipulation, physical mnemonics and gestural actions. First, for direct manipulation, the scaled-world grab scales the whole world about the user's eyepoint each time the user grabs a distant object, so that the object ends up within arm's reach. Second, regarding physical mnemonics, users can show and hide menus with buttons, use hand-held widgets to remotely control other objects, and adapt the field of view. Finally, the gestural actions include head-butt zooming, orienting the head to look at menus, two-handed flying, and deleting an object by throwing it over one's shoulder.
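As a rough illustration of the scaled-world grab, here is a minimal Python sketch. The uniform scaling about the eyepoint follows the idea described above; the vector names and example values are assumptions made for illustration, not the paper's implementation.

    import numpy as np

    def scaled_world_grab_scale(eye, hand, obj):
        # Scale factor that, applied to the world about the eyepoint,
        # moves the grabbed object along the eye-object ray to the
        # hand's distance from the eye (a sketch, not the paper's code).
        return np.linalg.norm(hand - eye) / np.linalg.norm(obj - eye)

    def scale_about_point(p, center, s):
        # Uniformly scale a world-space point p about a center.
        return center + s * (p - center)

    # Hypothetical example: an object 5 m ahead, a hand 0.6 m from the eye.
    eye  = np.array([0.0, 1.7,  0.0])
    hand = np.array([0.3, 1.4, -0.4])
    obj  = np.array([0.0, 1.5, -5.0])
    s = scaled_world_grab_scale(eye, hand, obj)
    print(scale_about_point(obj, eye, s))  # object now at hand distance

Because the scaling is centered on the eyepoint, the rendered view is unchanged at the moment of the grab; the user simply finds that the distant object can now be manipulated directly at the hand.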

Two experiments were conducted. The first aimed at showing the difference between manipulating objects co-located with one's hand and objects at a distance, by having subjects align objects with targets. The second was set up to compare interaction with a widget held in one's hand and with a widget floating in space, by measuring how accurately users could return their hand to its original location. Both experiments showed the effectiveness of proprioception.

Future work will be to find further complementary cues to compensate for the lack of haptic feedback and to enhance users' efficiency in such environments.


OmniTouch: Wearable Multitouch Interaction Everywhere

This paper presents OmniTouch, a system that enables projection onto, and interaction with, many everyday surfaces, such as a hand or a sheet of paper. Mobile computers and smartphones already let people communicate and use new technologies everywhere, but displaying information and controls on larger surfaces would broaden the possibilities.

On the hardware side, OmniTouch consists of three principal components: a depth camera, a tiny projector, and the shoulder-worn frame that carries them. On the software side, the main challenges are finger segmentation and click detection, in particular to provide multitouch finger tracking. To project onto a given surface, the surface must first be segmented and correctly tracked; then the camera and the projector must be calibrated to each other; and finally interactive areas must be defined so that the user can interact with what is displayed on the surface.
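To make the click-detection step concrete, here is a minimal Python sketch of the general idea of deciding touch versus hover from depth data. The threshold value, function names and synthetic frame are assumptions for illustration, not OmniTouch's actual pipeline or parameters.

    import numpy as np

    CLICK_THRESHOLD_M = 0.01  # assumed: finger "touches" within ~1 cm

    def is_clicked(depth_map, fingertip, surface_depth):
        # True if the tracked fingertip has merged with the surface,
        # i.e. its depth is within the threshold of the surface depth.
        # depth_map     : 2D array of depths in meters (depth camera)
        # fingertip     : (row, col) pixel of the tracked fingertip
        # surface_depth : depth of the projection surface at that pixel
        return abs(depth_map[fingertip] - surface_depth) < CLICK_THRESHOLD_M

    # Hypothetical usage: a flat surface at 0.5 m, a fingertip first
    # hovering 3 cm above it, then touching it.
    depth = np.full((240, 320), 0.5)
    tip = (120, 160)
    depth[tip] = 0.47
    print(is_clicked(depth, tip, 0.5))  # False: hovering
    depth[tip] = 0.495
    print(is_clicked(depth, tip, 0.5))  # True: touching

In the real system, the fingertip position itself comes from the finger-segmentation step, and the surface depth from the tracked surface model described above.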

This system leads to many applications. For instance, it can be used to emulate a phone keypad, to switch between applications, or to manage the visibility and organization of data. Naturally, tests were conducted on several surfaces: on the body (hand or arm), on objects held by the subjects, and on fixed surfaces. The results show that the wall is the best surface to project onto and interact with, since buttons there can be smaller than on other surfaces while remaining usable. In general, the closer the surface, the better.

In the future, the main work will be to extend this system to 3D environments and therefore to project onto non-planar surfaces. Other open questions concern the use of the body, especially other people's bodies.
