Ruge’s week 13 summaries

OmniTouch: Wearable Multitouch Interaction Everywhere

The OmniTouch paper discussed a prototype that combined a depth-sensing camera with a small projector to create small, usable interfaces in the real world. The first component, the depth sensor, was used to reliably determine the location of the user's hands, fingers, pointers, and surfaces in the nearby environment. The second component, a small projector mounted alongside the depth camera, was used to project interfaces onto the surfaces the camera had scanned.
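To make the sensing side more concrete, here is a rough sketch of how narrow, raised "finger-like" slices could be picked out of a single depth frame. This is not the paper's actual algorithm; the function name, thresholds, and toy frame are all my own illustrative guesses.

    import numpy as np

    def find_finger_slices(depth, min_width=5, max_width=25, drop_mm=15):
        """Scan each row of a depth image (in mm) for narrow runs of pixels
        that sit noticeably closer to the camera than their neighbors,
        a crude stand-in for the cylindrical finger cross-sections that
        OmniTouch hunts for. Returns (row, start_col, end_col) candidates."""
        slices = []
        rows, cols = depth.shape
        for r in range(rows):
            row = depth[r]
            c = 0
            while c < cols - max_width:
                # a candidate starts where depth drops sharply (near edge of a finger)...
                if row[c] - row[c + 1] > drop_mm:
                    for w in range(min_width, max_width):
                        # ...and ends where depth rises sharply again (far edge)
                        if c + w + 1 < cols and row[c + w + 1] - row[c + w] > drop_mm:
                            slices.append((r, c + 1, c + w))
                            c += w
                            break
                c += 1
        return slices

    # toy frame: a flat surface at 800 mm with one "finger" 20 mm above it
    depth = np.full((10, 60), 800.0)
    depth[5, 20:30] = 780.0
    print(find_finger_slices(depth))  # -> [(5, 20, 29)]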

When used in conjunction, the projector can be recalibrated based on the location and orientation of the surface it is meant to appear on. For example, an interface projected onto a table can be adapted and panned differently than one projected onto a hand-held tablet. Since the depth sensor reads the environment at 30 frames per second, the examples were even able to use the user's hands and body as projection surfaces.
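The "repaint the interface for whatever surface it lands on" step is essentially a perspective warp. Here is a minimal sketch of that idea using OpenCV; the corner coordinates are made-up values standing in for what the depth camera would report, not anything taken from the paper.

    import numpy as np
    import cv2  # assumes OpenCV is installed

    # Corners of the interface as we would like to draw it (its own pixel space).
    panel = np.float32([[0, 0], [400, 0], [400, 200], [0, 200]])

    # Hypothetical corners of the target surface (say, a tilted notepad) as
    # located by the depth camera and mapped into projector coordinates.
    surface = np.float32([[120, 80], [460, 110], [440, 290], [100, 250]])

    # Homography that pre-warps the interface so it appears undistorted on the surface.
    H = cv2.getPerspectiveTransform(panel, surface)

    interface = np.zeros((200, 400, 3), np.uint8)
    cv2.rectangle(interface, (20, 20), (180, 180), (0, 255, 0), -1)  # a green "button"

    # The image the projector should actually emit (assuming a 640x480 projector).
    to_project = cv2.warpPerspective(interface, H, (640, 480))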

One of the major questions with many AR applications, and with mobile computing in general, is what input devices will be used. New solutions that allow dynamic input without requiring dedicated hardware or objects to exist in the physical world will be necessary to make some mobile applications socially acceptable. It's obvious that we can't walk around with a large Kinect on our shoulders, but smaller depth sensors built into products like Google Glass could be quite useful.


Moving Objects In Space: Exploiting Proprioception In Virtual-Environment Interaction

Moving Objects in Space describes techniques for manipulating a virtual environment and creating a sense of presence by exploiting the body's existing sense of its own position and movement; the exact term is proprioception. The premise of the paper is that a key problem with many virtual environments is the lack of haptic response. The authors propose using the user's own body to compensate for that missing piece and provide some of the missing feedback. Examples include techniques that bring objects 'within arm's reach' even though they may be much further away, familiar-feeling widgets and tools for manipulating the environment that make the user feel like he or she is 'touching' objects, and using head orientation so the world responds to the user's gaze rather than just using it for 'looking'.
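The "within arm's reach" idea stuck with me, so here is a small sketch of how I understand the scaled-world grab: when an object is grabbed, the whole world is scaled down about the eye point so the object lands at arm's length. The variable names and numbers are mine, not the paper's.

    import numpy as np

    def scaled_world_grab(eye, hand, obj):
        """Scale the world uniformly about the eye so the grabbed object ends
        up at the same distance from the eye as the hand. If the object was
        selected by occluding it with the hand, it lands right at the hand,
        yet the view barely changes because the scaling is centered on the
        eye. All arguments are 3-element numpy arrays (meters)."""
        scale = np.linalg.norm(hand - eye) / np.linalg.norm(obj - eye)

        def transform(point):
            # apply the same eye-centered scaling to any point in the world
            return eye + scale * (point - eye)

        return scale, transform

    eye  = np.array([0.0, 1.7, 0.0])   # head position
    hand = np.array([0.3, 1.3, 0.6])   # hand roughly 0.8 m from the eye
    obj  = np.array([3.0, 1.0, 6.0])   # object roughly 6.7 m away

    scale, transform = scaled_world_grab(eye, hand, obj)
    print(scale)            # how much the world shrinks (~0.12)
    print(transform(obj))   # the object, now at arm's length from the eye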

As with any user interface, I feel that although careful research can be helpful, all of these systems simply require extensive testing, and careful refinement can turn even a failed idea into a fantastic interface. I think the important thing for most studies into new ways to navigate and use virtual environments is that they are repeated, over and over. As we have read in a few papers, small changes in appearance, such as animation speed or how human a character looks, can make a real difference.
