Week 13 Summaries

OmniTouch: Wearable Multitouch Interaction Everywhere

The paper presents OmniTouch, a novel wearable interactive system from Microsoft Research. The system aims to make access to, manipulation of, and sharing of information truly ubiquitous; today's mobile and cloud-connected devices provide that access, but their interfaces offer limited screen real estate and constrained modes of interaction. OmniTouch is an interactive projection system that lets users appropriate their own body, nearby physical objects (a book, a notepad), and the surrounding environment (walls, tables) as interactive surfaces. The system consists of two main components integrated into a shoulder-worn unit. The first is a depth camera that provides a 320×240 depth map at 30 FPS and can image objects as close as 20 cm. The second is a Microvision ShowWX+ laser pico-projector, whose wide-angle, focus-free projection keeps graphical elements sharp regardless of their distance from the projector. The system supports multitouch interaction by tracking multiple fingers on arbitrary surfaces, both flat and irregular, with no calibration or training. Its algorithm resolves the X, Y, and Z position of each finger and determines whether it is touching or merely hovering over a surface, emulating conventional mouse and touchscreen input. If evolved into a more compact form factor, the system could be genuinely useful for everyday computing tasks that need a large interactive area, sparing the user from carrying a laptop or desktop everywhere.
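As a rough illustration of the touch-versus-hover decision, the sketch below compares a fingertip's depth against the depth of the surface around it. This is only a simplified stand-in under assumed constants, not the paper's actual pipeline (which detects finger-like cylindrical slices from depth derivatives and resolves touch by flood filling); `classify_touch`, `TOUCH_THRESHOLD_MM`, and `RING_RADIUS_PX` are invented names for the example.

```python
import numpy as np

# Illustrative constants (not from the paper): a fingertip within ~1.5 cm
# of the underlying surface counts as a touch.
TOUCH_THRESHOLD_MM = 15
RING_RADIUS_PX = 12

def classify_touch(depth_mm, tip_xy):
    """Classify a tracked fingertip as 'touch' or 'hover' on a depth map.

    depth_mm: 2D array of per-pixel depth in millimetres (e.g. 240x320).
    tip_xy:   (x, y) pixel position of a fingertip already located by the
              finger-tracking stage (not implemented here)."""
    x, y = tip_xy
    tip_depth = depth_mm[y, x]

    # Estimate the depth of the underlying surface (arm, table, wall, ...)
    # from a ring of samples around the fingertip.
    angles = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)
    ring_x = np.clip((x + RING_RADIUS_PX * np.cos(angles)).astype(int),
                     0, depth_mm.shape[1] - 1)
    ring_y = np.clip((y + RING_RADIUS_PX * np.sin(angles)).astype(int),
                     0, depth_mm.shape[0] - 1)
    surface_depth = np.median(depth_mm[ring_y, ring_x])

    # A hovering fingertip is noticeably closer to the camera than the
    # surface behind it; once that gap collapses, call it a touch.
    return "touch" if surface_depth - tip_depth < TOUCH_THRESHOLD_MM else "hover"
```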

Moving Objects in Space: Exploiting Proprioception In Virtual-Environment Interaction

The paper examines the limitations of object manipulation in immersive virtual environments: the lack of haptic and acoustic feedback from real objects, inaccurate tracking in some systems, and the absence of a consistent, unifying interaction framework (analogous to the desktop metaphor for 2D object manipulation). It proposes a unified framework for virtual-environment interaction based on proprioception, a person's sense of the position and orientation of their own body and limbs. Studies show that body-relative interaction techniques, which exploit proprioception, are more effective than techniques relying solely on visual information. The paper discusses three forms of such body-relative interaction:

Direct manipulation – when the user reaches for a distant virtual object, the world is scaled down about the user's head so that the object ends up directly at the user's hand. Because the user has a strong proprioceptive sense of where their hand is, this gives much finer control (see the sketch after this list).

Physical mnemonics – menus and widgets in the virtual space (used to store and retrieve virtual objects) are anchored relative to the user's body, for example as pull-down or hand-held menus, so the user always knows where they are relative to their own position and orientation. These controls may sit at a fixed body-relative offset and move with the user, keeping them easily accessible at all times.

Gestural actions – gestures provide intuitive and efficient mechanics for interaction and object manipulation. The user's sense of their own body and its movement, expressed as gestures, can be used to invoke commands or to communicate information in the virtual environment.
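To make the geometry of the first two techniques concrete, here is a small sketch; the coordinates, offsets, and function names are assumptions for illustration, since the paper presents these techniques geometrically rather than as code. The first function computes the scale factor behind scaled-world grab; the second shows the idea behind body-anchored physical mnemonics.

```python
import numpy as np

def scaled_world_grab(head, hand, obj):
    """Direct manipulation via scaled-world grab: uniformly scale the
    world about the user's head so the grabbed object lands at the hand.

    head, hand, obj: 3-vectors in world coordinates.
    Returns the scale factor and the object's post-scale position."""
    s = np.linalg.norm(hand - head) / np.linalg.norm(obj - head)
    return s, head + s * (obj - head)

def body_anchored_position(body_pos, body_rot, local_offset):
    """Physical mnemonic: keep a menu/widget at a fixed offset in the
    user's body frame (e.g. a pull-down menu above the head), so it
    moves with the user and proprioception tells the user where it is.

    body_rot: 3x3 rotation of the user's body frame in world coordinates."""
    return body_pos + body_rot @ local_offset

# Grab an object 5 m away with the hand 0.6 m from the head. Because the
# hand lies on the head-to-object ray at grab time (the user visually
# grabbed it), the scaled object coincides exactly with the hand.
head = np.array([0.0, 1.7, 0.0])
hand = np.array([0.0, 1.7, 0.6])
obj  = np.array([0.0, 1.7, 5.0])
s, obj_scaled = scaled_world_grab(head, hand, obj)
print(s, obj_scaled)   # 0.12 [0.  1.7 0.6] -- the object is now at the hand
```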
