Week 13 Summary

Moving Objects In Space

This paper addresses the absence of touch feedback in immersive virtual environments: users can see and hear the virtual world, but they cannot feel it. To compensate, the authors exploit proprioception, the body's sense of its own position, to build three kinds of body-relative interaction: direct manipulation, physical mnemonics, and gestural actions. They argue that working within the user's arm reach is especially effective, and they develop an automatic scaling mechanism so that users can interact with objects located at any distance.

Direct manipulation centers on the scaled-world grab. When a user grabs an object, the world is automatically scaled down so that the object comes within reach; when the object is released, the world is scaled back up. This lets users manipulate even the most remote objects with little effort. The same mechanism supports locomotion: users can move themselves by grabbing an object in the direction they want to travel and pulling.
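The scaling step can be illustrated with a short sketch. This is not the authors' code; it assumes a uniform scale about the user's eye point, and all names (scaled_world_grab, world_points, and so on) are made up for illustration:

```python
import numpy as np

def scaled_world_grab(world_points, eye, hand, grabbed_point):
    """Scale the world about the user's eye so the grabbed point lands
    roughly at the hand. Returns the scaled points and the scale factor,
    whose inverse would be applied when the object is released."""
    dist_to_object = np.linalg.norm(grabbed_point - eye)
    dist_to_hand = np.linalg.norm(hand - eye)
    s = dist_to_hand / dist_to_object          # < 1 for objects beyond arm's reach
    scaled_points = eye + s * (world_points - eye)
    return scaled_points, s
```

For example, grabbing an object 10 m away with the hand 0.5 m from the eye gives s = 0.05, so the scene shrinks by a factor of 20 about the eye and the grabbed object arrives at arm's length.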

Physical mnemonics address the problem of pull-down menus: a menu should be easy to find, yet it should not occlude the scene. The authors' solution is to let users stow the menu just above their current view. Whenever the menu is needed, users pull it down with their hands or recall it through another interaction technique.
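A minimal sketch of how such a body-relative menu might be managed, assuming (purely for illustration) that the menu is stowed at a fixed offset above the head and is revealed when the hand is raised high enough; none of these names or thresholds come from the paper:

```python
import numpy as np

MENU_STOW_OFFSET = np.array([0.0, 0.35, 0.0])  # assumed: stowed ~35 cm above the head
PULL_THRESHOLD = 0.20                          # assumed: hand 20 cm above the head pulls it down

def update_menu(head_pos, hand_pos, menu_visible):
    """Return the menu anchor position and its updated visibility flag."""
    hand_height = hand_pos[1] - head_pos[1]    # y-up coordinate system assumed
    if not menu_visible and hand_height > PULL_THRESHOLD:
        menu_visible = True                    # user reached up: pull the menu into view
    elif menu_visible and hand_height < 0.0:
        menu_visible = False                   # hand dropped below the head: stow the menu
    anchor = hand_pos if menu_visible else head_pos + MENU_STOW_OFFSET
    return anchor, menu_visible
```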

Several gestural actions are introduced in the paper. Head-butt zoom lets users change their current view by leaning through a screen-aligned rectangle placed in front of their face. Head orientation can control a cursor over a menu to select the desired item. Two-handed flying provides an effective way to control locomotion. Over-the-shoulder deletion lets users get rid of virtual objects.
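Two-handed flying is typically driven by the relationship between the hands; a minimal sketch, assuming that flight direction comes from the vector between the hands and speed from their separation (the gain and dead-zone values are invented parameters):

```python
import numpy as np

def two_handed_flying_velocity(left_hand, right_hand, gain=2.0, dead_zone=0.1):
    """Return a flying velocity from the two hand positions (illustrative only)."""
    between = right_hand - left_hand
    separation = np.linalg.norm(between)
    if separation < dead_zone:                 # hands nearly together: stand still
        return np.zeros(3)
    direction = between / separation
    return gain * (separation - dead_zone) * direction
```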

Finally, the authors ran an experiment to evaluate their system, using a docking task in which participants aligned a hand-held shape with a target shape. Eighteen participants took part, and the results indicate that the proposed techniques are effective and preferred by users.

Question: How does the system address learnability, given that it relies on so many different gestures and interaction techniques?

OmniTouch: Wearable Multitouch Interaction Everywhere

OmniTouch is a wearable sensing and projection system that lets users treat their own body surfaces as an interface. The interface can also be transiently appropriated onto other everyday surfaces beyond the body.

The hardware consists of three components: a depth camera that tracks fingers and touches, a pico-projector that projects the graphical interface onto the chosen surface, and a computer to which both are tethered for prototyping purposes.

The authors then describe their techniques for multitouch finger tracking and finger click detection, and discuss related questions such as which kinds of surfaces can serve as an interface and how large the projected interface should be. Three placement strategies are defined: one size fits all, classification-driven placement, and user-specified placement.
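As a rough illustration of depth-based click detection (a simplified stand-in, not the paper's actual algorithm), one can compare the tracked fingertip's depth against the depth of the surface around it; the threshold and window size here are assumptions:

```python
import numpy as np

TOUCH_THRESHOLD_MM = 10.0   # assumed: fingertip within ~1 cm of the surface counts as a touch

def is_finger_clicked(depth_map, tip_row, tip_col, window=5):
    """Simplified click test: a finger is 'down' when its tip depth is close
    to the median depth of the surrounding patch (i.e. the surface)."""
    tip_depth = float(depth_map[tip_row, tip_col])
    r0, r1 = max(0, tip_row - window), tip_row + window + 1
    c0, c1 = max(0, tip_col - window), tip_col + window + 1
    patch = depth_map[r0:r1, c0:c1].astype(float)
    surface_depth = np.median(patch)           # rough estimate of the surface under the finger
    return (surface_depth - tip_depth) < TOUCH_THRESHOLD_MM
```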

Several example applications are presented, illustrating how users can interact conveniently, how many different surfaces can host an interface, and how metadata can make the system more context sensitive.

Finally, a user study evaluates the system's click detection, clicking accuracy, and dragging accuracy across four surface conditions: hand, arm, pad, and wall. The results show that interaction on the wall is the most accurate, that the hand works best at a medium distance, and that the pad performs well at a close distance.

Question: Will changes in skin shape affect the accuracy of finger click detection?

 
