week 13 summaries

OmniTouch: Wearable Multitouch Interaction Everywhere

This paper presents OmniTouch, a shoulder-worn system developed by the authors.

The system turns everyday surfaces into graphical, interactive, multitouch input areas. It comprises three components: a short-range PrimeSense depth camera and a Microvision ShowWX+ laser pico-projector, both mounted on a form-fitting metal frame worn on the shoulder.

To identify fingers, the authors take the depth map of the scene, compute the depth derivative along the X and Y axes, and look for vertical slices of cylinder-like objects. Candidate slices must fall within the diameter range of human fingers. Proximate slices are then grouped into contiguous paths; the approach assumes a right-handed user, so the leftmost point of a path is taken as the fingertip.
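The slice-finding step can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: it scans a single depth-map row for a sharp drop in depth (the left edge of something closer than the background) followed, within a finger-like width, by a sharp rise; the thresholds and pixel widths are made-up illustrative values.

```python
import numpy as np

def find_finger_slices(depth_row, edge_thresh=15, min_w=3, max_w=8):
    """Scan one row of a depth map for finger-like cross-sections:
    a steep negative depth derivative (surface jumps closer to the
    camera) followed, within min_w..max_w pixels, by a steep positive
    one. Returns (start, end) pixel ranges of candidate slices."""
    d = np.diff(depth_row.astype(int))          # depth derivative along X
    slices = []
    i = 0
    while i < len(d):
        if d[i] <= -edge_thresh:                # falling edge: closer object begins
            for j in range(i + 1, min(i + max_w + 1, len(d))):
                if d[j] >= edge_thresh:         # rising edge: back to background
                    if j - i >= min_w:
                        slices.append((i + 1, j + 1))
                    i = j
                    break
        i += 1
    return slices
```

Repeating this per row and grouping overlapping slices across rows gives the contiguous finger paths the summary describes.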

The finger segmentation process gives the locations of the fingers, and a flood-fill step determines whether or not each finger is clicking (touching the surface).
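The intuition behind the flood-fill click test can be sketched as below. This is a simplified illustration under my own assumptions, not the paper's code: when a finger rests on a surface there is no depth discontinuity between finger and surface, so a fill started at the fingertip "leaks" into the surface and grows large, while a hovering finger stays a small isolated blob. The `tol` and `merge_area` thresholds are illustrative.

```python
from collections import deque

import numpy as np

def is_touching(depth, seed, tol=5, merge_area=100):
    """Flood-fill outward from a fingertip seed pixel, expanding into
    4-neighbours whose depth is within `tol` of the current pixel.
    If the filled region grows past `merge_area` pixels, the fill has
    merged with the surface, so the finger is classified as touching."""
    h, w = depth.shape
    seen = {seed}
    q = deque([seed])
    while q and len(seen) < merge_area:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen:
                if abs(int(depth[ny, nx]) - int(depth[y, x])) <= tol:
                    seen.add((ny, nx))
                    q.append((ny, nx))
    return len(seen) >= merge_area
```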

A surface-selection process chooses an appropriate surface for projection, rejecting surfaces that are too small or too inclined. To enable authoring, the projector and camera are calibrated by estimating the projector's extrinsic parameters relative to the camera.
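The "too small or too inclined" filter amounts to a couple of geometric checks. A minimal sketch, with thresholds and a projector axis that are my own illustrative assumptions rather than the paper's values:

```python
import numpy as np

def surface_ok(area_cm2, normal, min_area=60.0, max_tilt_deg=40.0):
    """Accept a candidate projection surface only if it is large
    enough and its normal is not too inclined relative to the
    projector's optical axis (assumed here to point along -z)."""
    proj_axis = np.array([0.0, 0.0, -1.0])
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    tilt = np.degrees(np.arccos(abs(np.dot(n, proj_axis))))
    return area_cm2 >= min_area and tilt <= max_tilt_deg
```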

A main issue is how to keep the interface on the surface. The authors come up with three approaches:

- One Size Fits All, which uses a surface's lock point and orientation but does not adapt the size of the interface.

- Classification-Driven Placement, which determines an appropriate nearby location for projecting the interface, but does not scale well.

- User-Specified Placement, which lets the user specify the appropriate location of the interface.

The authors present several example applications of the system, such as multi-surface interaction, dragging, and painting.

This paper shows that touch input on everyday surfaces is both possible and convenient, and the experiments reported show that the technology performs well enough for several tasks.


Moving Objects In Space: Exploiting Proprioception In Virtual-Environment Interaction

This paper discusses interaction with objects in virtual environments. VR systems can greatly enhance the effectiveness of traditional computer interfaces by providing a more natural and intuitive interface. However, many VR applications have not gone beyond the research laboratory, because these applications exploit the visual presentation but not the interaction.

Several factors, such as the lack of haptic feedback and the limited amount and precision of input information, make the design of such a system very hard.

Moreover, there is no unifying framework for the development of object interaction.

This paper provides a new framework based on proprioception, which uses the position and orientation of the user's body as an additional source of information.

These techniques give better results than those based solely on vision. The user can take advantage of direct manipulation, of controls placed at a fixed position relative to the body, and of gestural actions that facilitate the recall of actions.

The authors developed mechanisms to keep objects within arm's reach and to take advantage of all the properties provided by proprioception. One method scales down the world when the user wants to reach a distant object. The scaled-world grab technique makes this convenient by scaling the world down for each grab and rescaling it afterwards. The technique can also be extended to let the user move through the environment: scale the world down, grab an object, and rescale; the user is now where the object is located.
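The geometry of scaled-world grab can be sketched in a few lines. This is an illustrative reconstruction under my own assumptions, not the authors' code: scaling the world uniformly about the user's eye brings the grabbed object to the hand, while leaving the scene visually unchanged from the eye's point of view.

```python
import numpy as np

def scaled_world_grab(object_pos, hand_pos, eye_pos):
    """Return the scale factor and a point transform that shrink the
    world about the eye so the selected object lands at the hand.
    The factor is the ratio of hand-to-eye and object-to-eye
    distances; because the scaling is centred on the eye, every
    world point keeps its direction as seen from the eye."""
    object_pos, hand_pos, eye_pos = map(
        lambda p: np.asarray(p, float), (object_pos, hand_pos, eye_pos))
    s = (np.linalg.norm(hand_pos - eye_pos)
         / np.linalg.norm(object_pos - eye_pos))

    def transform(p):
        return eye_pos + s * (np.asarray(p, float) - eye_pos)

    return s, transform
```

Rescaling with the inverse factor after the grab (or after the user releases at a new location) gives the manipulation and travel behaviours described above.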

To ease menu use, a menu can follow the user, keeping a constant distance from the body. For convenience, this menu can be hidden and pulled down only when the user needs it.
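Such a body-relative widget boils down to expressing a fixed offset in the user's body frame each frame. A minimal sketch, where the frame construction and the offset values are my own illustrative choices:

```python
import numpy as np

def menu_world_pos(torso_pos, torso_forward, up=(0, 1, 0),
                   offset=(0.0, 0.35, 0.5)):
    """Place a follow-along menu at a fixed (right, up, forward)
    offset in the user's body frame, so it keeps the same distance
    and bearing as the user moves and turns."""
    f = np.asarray(torso_forward, float)
    f /= np.linalg.norm(f)
    r = np.cross(np.asarray(up, float), f)   # body-right axis
    r /= np.linalg.norm(r)
    u = np.cross(f, r)                       # re-orthogonalised body-up
    return (np.asarray(torso_pos, float)
            + offset[0] * r + offset[1] * u + offset[2] * f)
```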

Gestural actions can also be used to explore the environment more intuitively: head-butt zoom, which zooms on an object depending on the inclination of the head; the look-at menu, where you select whatever you are looking at; two-handed flying, where the user specifies the speed and direction of flight with the arms; and over-the-shoulder deletion, which keeps deleted objects behind the shoulder and allows the user to recover them.
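Two-handed flying is simple enough to sketch: the vector between the hands gives the flight direction, and its length (beyond a small dead zone, so the user can stop by bringing the hands together) gives the speed. The `gain` and `dead_zone` constants below are illustrative tuning values of my own, not the paper's.

```python
import numpy as np

def flying_velocity(left_hand, right_hand, gain=2.0, dead_zone=0.1):
    """Map two tracked hand positions to a flight velocity: direction
    along the inter-hand vector, speed proportional to how far apart
    the hands are beyond a small dead zone."""
    v = np.asarray(right_hand, float) - np.asarray(left_hand, float)
    dist = np.linalg.norm(v)
    if dist <= dead_zone:
        return np.zeros(3)                # hands together: stop flying
    return gain * (dist - dead_zone) * (v / dist)
```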

Two experiments show that users benefit significantly from the hand-held widgets and from the proximity of objects.
