Shane’s Week 12 Summaries

OmniTouch:

OmniTouch is a shoulder-mounted input/output device. The goal is to let the user treat surfaces in their environment as a touch screen, much like a smartphone, so that any surface the user can find becomes an interactive space. The device has two main parts: a projector that displays the interface, and a depth camera that produces a depth map in millimeters at 30 FPS. The projector projects the interface onto a surface, and the user interacts with it by clicking or dragging with their right hand (the device is mounted on the left shoulder).

To track the user's hand, the system runs template matching on the derivative of the depth map: it looks for cylinders of a certain thickness and assumes that the leftmost point of each cylinder is the fingertip. Clicks are detected when the user's finger comes within 1-2 mm of the surface acting as the UI. The system can recognize certain objects in the user's surroundings and knows how large the interface should be and where to place it; the user can also define an arbitrary surface as the interface, but then they must place and correctly size the interface elements themselves. Accuracy improves as the distance between the surface and the camera shrinks. The authors built some simple applications for the system, such as a note taker and a painting program, and ran a user study with 12 participants in which the system correctly recognized clicks about 95 percent of the time.
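A minimal sketch of the finger-and-click detection described above, assuming the depth map arrives as a 2-D array of millimeter values. The edge threshold, expected finger width, and the pixel-to-millimeter conversion are illustrative guesses, not values from the paper:

    import numpy as np

    FINGER_WIDTH_MM = (5, 25)   # assumed plausible finger cross-section width
    EDGE_JUMP_MM = 20           # assumed depth jump marking a finger edge

    def find_finger_slices(depth_row, px_to_mm):
        """Scan one row of the depth map for cylinder-like cross sections.

        A finger crossing the row shows up as a sharp drop in depth (the near
        edge of the cylinder), a short run of pixels, then a sharp rise.
        """
        d = np.diff(depth_row.astype(float))       # horizontal depth derivative
        starts = np.where(d < -EDGE_JUMP_MM)[0]    # surface -> finger transitions
        ends = np.where(d > EDGE_JUMP_MM)[0]       # finger -> surface transitions
        slices = []
        for s in starts:
            later = ends[ends > s]
            if later.size == 0:
                continue
            e = later[0]
            width_mm = (e - s) * px_to_mm
            if FINGER_WIDTH_MM[0] <= width_mm <= FINGER_WIDTH_MM[1]:
                slices.append((s, e))              # candidate finger cross section
        return slices

    def is_click(fingertip_depth_mm, surface_depth_mm, touch_thresh_mm=2.0):
        """Register a click when the fingertip is within ~1-2 mm of the surface."""
        return abs(fingertip_depth_mm - surface_depth_mm) <= touch_thresh_mm

The real system does more (it tracks fingers over time and segments the projected surface), but the sketch shows the basic idea of reading finger cross sections out of the depth derivative and thresholding the fingertip's distance to the surface to register a click.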

Proprioception:

This paper is about overcoming the lack of haptic feedback in VR applications. Virtual-environment applications generally lack the haptic feedback users get from interacting with the real world. To alleviate some of these problems and to increase immersion, the authors explore using proprioception (one's sense of one's own body) to augment how the user interacts with the environment.

In testing, they found that pointing techniques such as laser beams are more difficult to use than manipulating an object directly with the hands. Because of this, when a user selects an object in the virtual space, the world is scaled by a factor based on the distance from the user to the object; as the world scales, the selected object is brought to the user's location, where it can be manipulated with the hands. Another use of proprioception is treating the body as a map for storing items: a user can place an item at a spot on their body, and their sense of self helps them find it again. The system also uses gestures as commands, letting the user execute actions by moving their body; one example is deleting an object by tossing it over a shoulder. Collapsible menus maximize the screen space available at any given time, and the user expands a menu by simply dragging it down. Users also preferred holding widgets in their hands rather than having them float in space in front of them. Finally, the authors explored using head position and orientation to control the cursor, letting users drive the interface with their heads.
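A minimal sketch of the scaling idea described in this summary: the scene is scaled uniformly about the user's viewpoint by the ratio of hand distance to object distance, so the selected object ends up at the hand. The function and variable names are illustrative, not from the paper:

    import numpy as np

    def grab_scale_factor(eye_pos, hand_pos, object_pos):
        """Scale factor = (eye-to-hand distance) / (eye-to-object distance)."""
        eye_pos, hand_pos, object_pos = map(np.asarray, (eye_pos, hand_pos, object_pos))
        return np.linalg.norm(hand_pos - eye_pos) / np.linalg.norm(object_pos - eye_pos)

    def scale_point_about(point, center, s):
        """Uniformly scale a point about a center by factor s."""
        return np.asarray(center) + s * (np.asarray(point) - np.asarray(center))

    # Example: an object 10 m away is brought to a hand held 0.5 m from the eye.
    eye, hand, obj = np.zeros(3), np.array([0.0, 0.0, 0.5]), np.array([0.0, 0.0, 10.0])
    s = grab_scale_factor(eye, hand, obj)               # 0.05
    assert np.allclose(scale_point_about(obj, eye, s), hand)

Because the scaling is centered on the user's viewpoint, the rendered image barely changes at the moment of the grab, while the object is physically within arm's reach and can be manipulated with the hands.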
