Summaries:

Moving Objects in Space: Exploiting Proprioception In Virtual-Environment Interaction
In virtual environments (VEs), precise manipulation of objects in 3D worlds is hard for three main reasons. First, VEs offer little or no haptic feedback, which makes it difficult and tiring for users to achieve precise results. Second, the input information is limited: inputs usually come from tracking the position and orientation of the hands and head, which is quite restrictive compared with the richness of real-world interaction. Finally, sensors still offer only limited precision. Another problem is the lack of a unified framework for interaction in VEs.
The authors argue that the user's own body can be used to address these problems. Proprioception, the sense of the position and orientation of one's body, is available in every VE because the body is the one solid reference the user is sure to find there. If an object is located exactly at the position of the user's hand, the user can easily tell where the object is in the 3D world (direct manipulation). In the same spirit, if virtual objects are attached to points on the user's body, the user can always locate them again (physical mnemonics). Finally, a user can also use gestures to transmit information or invoke actions (gestural actions). Interacting within the user's natural reach brings advantages beyond proprioception alone: working within arm's length allows a direct mapping between hand and object motion and finer control of distances and angles. To avoid forcing the user to travel long distances in the virtual world, the authors also developed an automatic scaling system that reduces the distance between the user and otherwise unreachable objects.
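The paper describes this scaling qualitatively; the sketch below shows one simple way such automatic scaling could be computed, assuming the world is uniformly scaled about the user's head so a distant object lands within arm's reach. The function names and example coordinates are illustrative, not taken from the paper.

```python
import numpy as np

def scaled_world_factor(head_pos, hand_pos, object_pos):
    """Scale factor that brings a distant object to the user's hand."""
    reach = np.linalg.norm(hand_pos - head_pos)      # head -> hand distance
    target = np.linalg.norm(object_pos - head_pos)   # head -> object distance
    return reach / target

def scale_world_about_head(points, head_pos, factor):
    """Uniformly scale world-space points about the head position."""
    return head_pos + factor * (points - head_pos)

# Hypothetical example: an object 10 m away, arm extended roughly 0.6 m.
head = np.array([0.0, 1.7, 0.0])
hand = np.array([0.0, 1.4, 0.55])
obj  = np.array([0.0, 1.5, 10.0])

s = scaled_world_factor(head, hand, obj)
print(scale_world_about_head(obj, head, s))  # object now sits at the hand's distance
```

After scaling, the grabbed object sits at the same distance from the head as the physical hand, so it can be manipulated with the direct, within-reach mapping described above.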
OmniTouch: Wearable Multitouch Interaction Everywhere

Today, many mobile electronic devices are in use, and they keep getting smaller. One problem with this miniaturization is the shrinking surface available for the user interface. In this paper, Microsoft presents OmniTouch, a shoulder-worn device composed of three components. A depth camera senses objects less than 20 cm away; the computed position of any object has a Z-axis error below 5 mm. The second component is a laser projector that is wide-angle and focus-free regardless of distance.

Microsoft also developed a method to compute the (x, y, z) coordinates of fingers without calibration: first, a segmentation pass finds finger candidates; then the most probable pattern is selected; finally, the algorithm decides whether each finger is touching or hovering. Tracking fingers alone is not sufficient for multitouch interaction, however: surfaces also have to be detected and tracked. OmniTouch provides three ways to define the interactive area. The first, called "one size fits all", uses a lock point and orientation to track the surface. The second is a classification-driven method with two steps: each surface from a small set is detected and classified, and the system then automatically sizes, tracks and adapts an interface to the available surfaces. The last method, user-specified placement, lets the user define the area himself.
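The summary above mentions that the algorithm must decide whether a finger is touching or hovering. As a rough illustration of how such a decision could be made from the depth map alone, the sketch below compares the fingertip depth with the depth of the surface just beyond it. The function name, sampling window, 20 mm threshold and the assumption that the finger points toward larger row indices are all my own simplifications, not values or logic taken from the paper.

```python
import numpy as np

TOUCH_THRESHOLD_MM = 20.0  # illustrative threshold, not from the paper

def finger_is_touching(depth_mm, tip_row, tip_col, window=5):
    """Return True if the fingertip appears to rest on the surface behind it.

    depth_mm : 2-D array of per-pixel depth values (millimetres).
    tip_row, tip_col : pixel coordinates of an already-detected fingertip.
    """
    tip_depth = depth_mm[tip_row, tip_col]

    # Sample a small patch just past the fingertip: when the finger touches a
    # surface, that patch lies at nearly the same depth as the fingertip;
    # when the finger hovers, the surface behind it is noticeably farther.
    r0 = tip_row + 1
    c0 = max(tip_col - window // 2, 0)
    patch = depth_mm[r0 : r0 + window, c0 : c0 + window]
    surface_depth = np.median(patch)

    return (surface_depth - tip_depth) < TOUCH_THRESHOLD_MM
```

A per-pixel comparison like this is only a sketch of the idea; a real system would need to be much more robust to sensor noise and to the finger's orientation in the image.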
