Week 13 Summaries

Moving Objects In Space: Exploiting Proprioception In Virtual-Environment Interaction

Manipulation in immersive virtual environments is difficult partly because users must do without the haptic contact with real objects they rely on in the real world to orient themselves and their manipulanda. The paper exploits proprioception, a person's sense of the position and orientation of their own body and limbs. It describes three forms of body-relative interaction: direct manipulation, which uses body sense to help control manipulation; physical mnemonics, which store and recall information relative to the body; and gestural actions, which use body-relative actions to issue commands.

It was found that body-relative interaction techniques which exploited proprioceptive feedback are more effective than techniques relying solely on visual information. Such body-relative interaction techniques provide:

  • a physical real-world frame of reference in which to work
  • a more direct and precise sense of control
  • “eyes off” interaction (the user doesn’t have to constantly watch what he’s doing)

An automatic scaling mechanism was developed to allow users to interact instantly with objects lying at any distance as though they were within arm's reach. Head orientation was used to determine which part of the world the user was currently looking at.
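The core idea of such automatic scaling can be sketched as follows: shrink the world about the user by whatever factor brings the selected object to arm's reach, so that hand motions map directly onto it. This is an illustrative sketch, not the paper's actual algorithm; the function name, the `ARM_LENGTH` constant, and the uniform-scaling assumption are mine.

```python
import numpy as np

ARM_LENGTH = 0.6  # assumed comfortable reach, in meters

def scaled_grab_positions(user_pos, object_pos, world_points):
    """Scale all world points about the user so the selected
    object lands exactly at arm's reach (illustrative only)."""
    dist = np.linalg.norm(object_pos - user_pos)
    scale = ARM_LENGTH / dist  # < 1 shrinks a distant world
    # Every point moves toward the user by the same factor,
    # so relative layout is preserved while the object
    # ends up ARM_LENGTH away.
    return user_pos + scale * (world_points - user_pos)

user = np.array([0.0, 1.7, 0.0])   # head position
obj = np.array([0.0, 1.7, 12.0])   # object 12 m away
scaled = scaled_grab_positions(user, obj, np.array([obj]))
# the scaled object now sits ARM_LENGTH in front of the user
```

Because the scaling is uniform about the user's head, the scene looks unchanged from the user's viewpoint, which is what makes the technique feel seamless.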

A set of physical mnemonics, i.e., virtual objects and controls stored relative to the user's body, has also been proposed. This includes pull-down menus, hand-held widgets, relative mode switching, etc.
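Storing a control relative to the body amounts to expressing its position in a body-centered coordinate frame rather than in world coordinates, so the control follows the user as they move and turn. A minimal sketch, assuming the body frame is just a position plus a yaw angle (function names and the yaw-only simplification are mine):

```python
import numpy as np

def yaw_matrix(yaw):
    """Rotation about the vertical (y) axis."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, 0.0, -s],
                     [0.0, 1.0, 0.0],
                     [s, 0.0, c]])

def world_to_body(point, body_pos, body_yaw):
    """Store a world-space point in the user's body frame."""
    return yaw_matrix(-body_yaw) @ (point - body_pos)

def body_to_world(point, body_pos, body_yaw):
    """Recover the world position of a body-stored point
    after the user has moved or turned."""
    return yaw_matrix(body_yaw) @ point + body_pos

# store a widget near the user's hip, then recall it
# after the user walks away and turns around
stored = world_to_body(np.array([0.3, 1.0, 0.2]),
                       body_pos=np.array([0.0, 0.0, 0.0]),
                       body_yaw=0.0)
recalled = body_to_world(stored,
                         body_pos=np.array([5.0, 0.0, 2.0]),
                         body_yaw=np.pi / 2)
```

The recalled widget sits at the same offset from the user's new position and heading, which is exactly what lets the user reach for it without looking.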

Gestural actions such as head-butt zoom and look-at menus use a combination of head movements, hand movements, and gestures in the virtual world to accomplish their respective tasks.

 

OmniTouch: Wearable Multitouch Interaction Everywhere

OmniTouch (from Microsoft Research) is a wearable depth-sensing and projection system that lets users interact with the ordinary surfaces we encounter in day-to-day life. OmniTouch provides capabilities similar to those of a mouse or touch screen.

The highlights of this system are multitouch input on arbitrary surfaces, both flat and irregular, with no calibration or training. It is worn on the shoulder and consists of a PrimeSense depth-sensing camera (the same technology behind the Microsoft Kinect) and a pico projector.

OmniTouch makes use of finger segmentation, which yields the 3D (X, Y, and Z) locations of the user's fingers. A secondary process is used to determine whether these fingers, specifically the fingertips, are in contact with a surface (i.e., a "click").
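The contact test reduces to comparing the fingertip's depth against the depth of the surface behind it: when the two are within a small tolerance, the finger is touching. A minimal sketch, assuming depths in millimeters and a threshold I chose for illustration (the paper's actual values and method differ):

```python
CLICK_MM = 20.0  # assumed contact threshold, in millimeters

def is_clicking(fingertip_depth_mm, surface_depth_mm):
    """A fingertip 'clicks' when its sensed depth is within a
    small threshold of the surface behind it (illustrative)."""
    return abs(fingertip_depth_mm - surface_depth_mm) < CLICK_MM

print(is_clicking(940.0, 1000.0))  # False: hovering ~60 mm above
print(is_clicking(992.0, 1000.0))  # True: pressed onto the surface
```

The threshold trades off false clicks from hovering fingers against missed clicks from sensor noise, which is why the segmentation step has to be reliable before contact detection can work.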

There were many small simple examples built using the OmniTouch system. These include painting applications on walls by using the left hand as the color pallet and the other hand to paint on the surface projection. Others include a projected keypad, wristwatch, application switcher, cell phone dial-pad, slider to unlock application and many more projected GUIs.

OmniTouch was able to track multiple objects within its field of view to support interaction at various levels. Surface orientation can also be used to determine whether a surface is public or private, which was pretty novel to me.
