OmniTouch: Wearable Multitouch Interaction Everywhere

OmniTouch is a system that enables graphical, interactive multitouch input on everyday surfaces; in other words, it provides on-the-go interactive capability with no calibration. The prototype has three main components: a custom short-range PrimeSense depth camera, a Microvision ShowWX+ laser pico-projector, and a desktop computer to which the camera and projector are tethered for prototyping purposes.

For finger segmentation, the system uses the depth map of the scene. Because the depth map provides absolute depth information, the scene can be processed much like a conventional 2D image. Users can click on a surface to trigger an action, or click and drag to set an interface's position and size in one continuous motion.
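
To make the segmentation step concrete, here is a minimal sketch (not the paper's actual implementation) of how finger-like slices might be picked out of a single row of a depth map; the width and relief thresholds below are made-up values for illustration only.

```python
import numpy as np

# Hypothetical thresholds: a finger-like slice is a narrow run of pixels
# that sits noticeably closer to the camera than the pixels on either side.
MIN_WIDTH_PX = 5        # assumed minimum finger width in pixels
MAX_WIDTH_PX = 25       # assumed maximum finger width in pixels
MIN_RELIEF_MM = 10.0    # assumed minimum depth step from the background

def finger_slices(depth_row):
    """Return (start, end) pixel ranges in one row that look finger-like.

    depth_row: 1D numpy array of depth values in millimetres.
    """
    slices = []
    n = len(depth_row)
    i = 1
    while i < n - 1:
        # Candidate start: depth drops sharply (surface steps toward the camera).
        if depth_row[i - 1] - depth_row[i] > MIN_RELIEF_MM:
            j = i
            # Advance until depth jumps back away from the camera.
            while j < n - 1 and depth_row[j + 1] - depth_row[j] < MIN_RELIEF_MM:
                j += 1
            width = j - i + 1
            if MIN_WIDTH_PX <= width <= MAX_WIDTH_PX:
                slices.append((i, j))
            i = j + 1
        else:
            i += 1
    return slices

# Stitching vertically adjacent slices across rows would then yield whole
# finger candidates, which can be tracked from frame to frame.
depth_map = np.full((240, 320), 800.0)        # toy scene: background at 80 cm
depth_map[100:140, 150:160] -= 30.0           # a 10-px-wide "finger" 3 cm closer
print(finger_slices(depth_map[120]))          # -> [(150, 159)]
```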

The system was evaluated with 12 participants on surfaces of the body, on objects held in the hand, and on a fixed surface. The results of this study are presented to guide future improvements to the system.

 

Moving Objects In Space

The paper describes three forms of body-relative interaction in virtual environments: direct manipulation, physical mnemonics, and gestural actions.

An example of direct manipulation is the scaled-world grab, in which the world is automatically scaled down about the user's head each time an object is grabbed, so that the grabbed object ends up within arm's reach. In this way, the user can bring even the most remote object in the scene to their side in a single operation.
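
As a rough illustration of the geometry (not the paper's code), the sketch below computes the uniform scale factor, applied about the head, that brings a grabbed object to roughly arm's length; the function names and coordinates are hypothetical.

```python
import numpy as np

def scaled_world_grab_factor(head_pos, hand_pos, object_pos):
    """Scale factor that brings the grabbed object to the hand's distance.

    Scaling the whole world by this factor about the head leaves the head
    fixed while the remote object ends up roughly an arm's length away.
    """
    arm_reach = np.linalg.norm(hand_pos - head_pos)
    object_dist = np.linalg.norm(object_pos - head_pos)
    return arm_reach / object_dist

def scale_about_head(points, head_pos, s):
    """Uniformly scale world-space points by s, centred on the head."""
    return head_pos + s * (points - head_pos)

# Toy example: a distant object 10 m away is pulled to ~0.67 m (arm's reach).
head = np.array([0.0, 1.7, 0.0])
hand = np.array([0.0, 1.4, 0.6])          # about 0.67 m from the head
obj = np.array([0.0, 1.7, 10.0])          # 10 m straight ahead

s = scaled_world_grab_factor(head, hand, obj)
print(scale_about_head(obj, head, s))     # object now ~0.67 m from the head
```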

For physical mnemonics, the authors suggest hiding virtual menus and controls in locations fixed relative to the user's body. A menu becomes accessible when the user grabs it and pulls it out into the scene.
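
A hedged sketch of the underlying idea: store each widget at a fixed offset in a body-relative coordinate frame and transform it into world space every frame, so it stays in the same place relative to the user. The frame convention and names here are assumptions, not the paper's implementation.

```python
import numpy as np

def body_to_world(body_pos, body_yaw, offset_in_body):
    """Transform a body-relative offset into world coordinates.

    body_pos: world position of the body origin (e.g. a torso tracker).
    body_yaw: body heading in radians about the vertical axis.
    offset_in_body: where the widget lives relative to the body
                    (x = right, y = up, z = forward, in metres).
    """
    c, s = np.cos(body_yaw), np.sin(body_yaw)
    # Rotation about the vertical (y) axis.
    rot = np.array([[  c, 0.0,   s],
                    [0.0, 1.0, 0.0],
                    [ -s, 0.0,   c]])
    return body_pos + rot @ offset_in_body

# Hide a menu just over the user's left shoulder; it follows the body as the
# user moves and turns, so it can always be grabbed from the same place.
menu_offset = np.array([-0.25, 0.3, -0.1])   # left, above, slightly behind

body_pos = np.array([2.0, 1.2, 5.0])
body_yaw = np.pi / 2                          # user has turned 90 degrees
print(body_to_world(body_pos, body_yaw, menu_offset))
```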

Head-butt zoom is an example of a gestural action that uses head motion to control interaction: by leaning the head forward or back, the user can switch quickly between a detailed view and a global view.
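
One way to picture this (purely illustrative, not the paper's implementation) is a trigger plane in front of the user: which side of the plane the head is on selects the view, with a small dead band so the view does not flicker at the boundary. All positions and thresholds below are hypothetical.

```python
import numpy as np

def view_for_head(head_pos, plane_point, plane_normal, hysteresis=0.02):
    """Pick 'detail' or 'global' view from which side of a trigger plane
    the head is on, with a small hysteresis band to avoid flicker.

    plane_normal points from the global-view side toward the detail-view side.
    """
    signed_dist = np.dot(head_pos - plane_point, plane_normal)
    if signed_dist > hysteresis:
        return "detail"
    if signed_dist < -hysteresis:
        return "global"
    return None  # inside the dead band: keep the current view

# Trigger plane 0.3 m in front of a seated user; +z means "leaned in".
plane_point = np.array([0.0, 1.6, 0.3])
plane_normal = np.array([0.0, 0.0, 1.0])

print(view_for_head(np.array([0.0, 1.6, 0.1]), plane_point, plane_normal))  # global
print(view_for_head(np.array([0.0, 1.6, 0.4]), plane_point, plane_normal))  # detail
```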
