Summaries Week 13

Moving Objects In Space: Exploiting Proprioception In Virtual-Environment Interaction

Working in a virtual environment with lasers and other pointing and manipulation tools is not as convincing as one might imagine: pointing methods are tiring and hard to use, largely because of the lack of haptic feedback. The authors therefore propose using the user's own body as the interaction tool, taking advantage of proprioception (a person's sense of the position and orientation of their own body) in the virtual world. They discuss three main classes of body-relative interaction. The first, direct manipulation, uses the body to control manipulation: once the user selects an object, the world is scaled down so that the user feels like he is holding the object in his hand, which is very natural to handle; when he releases the object, the world scales back to its original size. The second, physical mnemonics, is a way to store and recall information relative to the body. For example, to open a pull-down menu, the user simply reaches up just above their field of view and pulls it down; other widgets, buttons, and switches can be placed around the body in the same way. The third, gestural actions, uses body-relative actions to issue commands: for instance, the user can frame a region with their hands and fingers and then use it to zoom in or out, with a semi-transparent rectangle conveying that they are in head-butt (zooming) mode. The authors performed various user tests to analyze the effectiveness of these techniques.
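The scaled-world grab behind direct manipulation can be illustrated with a small sketch. The code below is only a rough illustration of the idea, not the authors' implementation (the function names and the uniform-scale-about-the-head formulation are my own assumptions): on grab, the world is scaled about the user's head so the selected object ends up at arm's length; on release, the inverse scale restores the original size.

```python
import numpy as np

def grab_scale_factor(head_pos, hand_pos, object_pos):
    """Scale factor that brings the selected object to arm's length.

    Scaling the world uniformly about the head by this factor places the
    object at the same distance from the head as the hand, so it feels
    like it is being held. (Sketch only; assumed formulation.)"""
    dist_to_object = np.linalg.norm(object_pos - head_pos)
    dist_to_hand = np.linalg.norm(hand_pos - head_pos)
    return dist_to_hand / dist_to_object

def scale_about(point, scale, positions):
    """Uniformly scale world-space positions about a fixed point (the head)."""
    return point + scale * (positions - point)

# Usage: scale the world down on grab, restore it on release.
head = np.array([0.0, 1.7, 0.0])
hand = np.array([0.3, 1.4, -0.4])
obj  = np.array([2.0, 1.0, -5.0])

s = grab_scale_factor(head, hand, obj)
held = scale_about(head, s, obj)            # object now at arm's length
released = scale_about(head, 1.0 / s, held)  # world scales back on release
```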

OmniTouch: Wearable Multitouch Interaction Everywhere

OmniTouch is a project from Microsoft Research. It is a wearable depth-sensing and projection system that enables interactive multitouch applications on everyday surfaces, including the user's own hands or legs. It supports familiar mouse and touchscreen interactions such as selecting and clicking. The hardware is simply a shoulder-worn unit with the projector and depth camera, tethered to a computer for prototyping. The paper explains finger and touch detection in depth, but in a nutshell: the device is assumed to be worn on the left shoulder while the user interacts with the right hand, the finger is flood filled over the depth map from its midpoint out to the tip, and the leftmost point of the filled region is assumed to be the fingertip. Because the depth camera sees the scene, the system also adapts the projected UI to the surface: whether it is projecting onto a large table or onto the user's hand, it detects the surface and adjusts accordingly. The later part of the paper discusses a number of applications, such as an on-surface keyboard, a watch, a menu, a coloring application, and a map application, which demonstrate the various kinds of interactions and surfaces that can be used. The authors also analyzed the different gestures on the different kinds of surfaces and found that the wall worked best overall, and that the hand works best as a projection surface at close distances.
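The flood-fill step mentioned above can be sketched as a standard region grow over the depth image: start from a point on the finger, keep neighboring pixels whose depth is close enough, and take the leftmost filled pixel as the fingertip (since the sensor sits on the left shoulder and the right hand interacts). This is a minimal illustration under my own assumptions; the depth tolerance and 4-connected fill are illustrative, not the paper's exact parameters.

```python
from collections import deque
import numpy as np

def segment_finger(depth, seed, depth_tol=13):
    """Flood fill a finger in a depth image (values in millimetres).

    Grows from `seed` (y, x), keeping 4-connected neighbours whose depth is
    within `depth_tol` of the pixel they were reached from. Returns the filled
    mask and the leftmost filled pixel, taken as the fingertip under the
    left-shoulder / right-hand assumption. (Sketch; threshold is illustrative.)"""
    h, w = depth.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(int(depth[ny, nx]) - int(depth[y, x])) < depth_tol:
                    mask[ny, nx] = True
                    queue.append((ny, nx))
    ys, xs = np.nonzero(mask)
    i = int(np.argmin(xs))
    fingertip = (int(ys[i]), int(xs[i]))  # leftmost filled pixel ~ fingertip
    return mask, fingertip
```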
