Week 12 Summaries: Users

This week, we read two papers on the challenges of designing 3DUI experiences. In “Exploring 3D Navigation”, the researchers present the results of user studies testing several novel interaction methods on desktop computers with mouse and keyboard input, comparing them across tasks of selection, travel control, navigation, and inspection in large virtual environments, and then repeating the comparison on very large displays. They primarily measure satisfaction, immersion, and task completion time.

The interesting 3DUI modes they invented have to do with experiencing immersion properties in the environment. They support click-and-drag object manipulation. Once an object is grabbed, the challenge is how it should be moved, manipulated, or interacted with. Direct object manipulation moves a copy of the object into a nearby working area; when the user finishes, the copy is destroyed and the reoriented object replaces the original in its position. Ghost copy allows the user to make multiple copies of the same object, position them, and modify them while seeing the impact of the adjustments from multiple angles at once. It stands to reason that more screen area would favor the ghost copy method.

To deal with occlusions in the working area, the researchers implemented techniques called inverse fog and ephemeral world compression. Inverse fog sets higher levels of transparency the nearer an object is to the user, so near (and larger) objects can be seen through and occluded objects remain visible with a minimum of movement. This ‘work around’ mode is a very good idea (a sketch of it follows below). Ephemeral world compression allows the user to shrink and magnify the entire environment at a fixed scale to support surveying the whole scene during navigation and search tasks. A similar method manipulates the angular spread of the view frustum, akin to physically zooming a lens.

The typical mode for manipulating view direction, separate from movement direction, is to map look to the mouse and movement to the keyboard, as in the Rubbernecking mode. A more interesting novel method in this paper is called Possession, as in ‘demonic possession’: the user’s view takes on the perspective of the object they select for interaction. For objects in the scene with obvious eyes or heads, this makes good sense; for objects less apt for anthropomorphization, the mode is harder to accept or implement.
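Here is a minimal sketch of how an inverse fog effect might compute per-object opacity; the function name and constants are my own illustration, not from the paper:

```python
def inverse_fog_alpha(distance, near=0.5, far=10.0):
    """Opacity increases with distance: near objects become see-through.

    Conventional fog fades distant objects; inverse fog does the
    opposite, so close occluders stop hiding the working area.
    """
    # Clamp distance into [near, far], then map linearly to [0, 1].
    t = max(0.0, min(1.0, (distance - near) / (far - near)))
    return t  # alpha: ~0 = mostly transparent (very near), 1 = opaque (far)
```

A nonlinear ramp (e.g., squaring t) would keep mid-range objects more solid while still fading the nearest occluders.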

The superhero metaphor appears to apply again in the speed-coupled flying and orbit travel modes. In this method, navigation through the space to distant locations or at high speeds is coupled to an adjustment in altitude. This movement makes the user feel like a superhero, I think. It makes a lot of sense that flying over the scene (especially in outdoor VEs) avoids problems of occlusion and collision. The addition of glide effects enhanced immersion in the second experiment.
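As a rough sketch of the coupling, altitude could simply be an increasing function of travel speed; the constants here are illustrative assumptions, not values from the paper:

```python
def speed_coupled_height(speed, base_height=2.0, gain=1.5):
    """Couple camera altitude to travel speed: the faster the user
    flies, the higher the viewpoint rises for an overview; slowing
    down settles the camera back toward ground level for inspection."""
    return base_height + gain * speed
```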

The most interesting upshot of this paper is the anthropomorphism: superpowers for bending, mutilating, and remediating objects in virtual reality, and superpowered egocentric navigation. I imagine this is ideal for immersion in game and fantasy environments, but perhaps less immersive, or even disorienting, for users unaccustomed to video gaming metaphors.

The second paper assigned is a very thorough taxonomy intended as a complete starting point for spatial input research. As distinct from desktop interactions for moving through 3D spaces, these interactions are based on free-space input. The paper seeks to organize and codify a framework for further study and to help ground researchers and designers. The researchers stress two major considerations: perception, that is, understanding versus experiencing a virtual environment; and ergonomics, that is, how the interface is used or ‘couples’ to the user’s body.

Several methods of interaction are presented in this taxonomy. First are spatial references: a real-world object relative to which the user can operate in 3D. Giving the user an item to ‘experience’ enhances interaction.

Next is a comparison of relative versus absolute gestures: users may have trouble moving in absolute space, but relative motions are much easier, as in the pen-and-tablet interface.
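A minimal sketch of the relative mapping, in the spirit of a mouse or pen-on-tablet (the names and gain factor are my own assumptions):

```python
def apply_relative_motion(cursor_pos, device_delta, gain=1.0):
    """Relative mapping: only the *change* in device position matters,
    so the user never has to reach an exact point in absolute space."""
    return tuple(c + gain * d for c, d in zip(cursor_pos, device_delta))
```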

Research shows two-handed interaction is more efficient and, for 3DUI, less likely to be disorienting as well. The relative motion of using two hands at once may reduce cognitive load by transferring the everyday skill of tool use to the computer.

Another consideration is multisensory feedback. Research shows that engaging a wider range of senses, such as proprioception, force feedback, and auditory cues, may help users more readily perceive their environment. There are good results using physical analogs such as flashlights. Perhaps other dummy objects, puppetry, or gesture-recognizing tokens can help with immersion in VEs. Compare with recent work by Murray, Mazalek, and others in the Georgia Tech eTV lab and SynLab.

Other simplifications in 3DUI include physical constraints such as gridding and miniature models, realized as either physical or software constraints. Finally, head-tracking techniques let interaction designers give back some of the information lost when a 3D VE is projected onto a 2D surface: head tracking recovers the depth cues provided by motion parallax, and only head tracking can do this well.
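A software gridding constraint can be as simple as quantizing positions; this snap-to-grid sketch uses an assumed cell size:

```python
def snap_to_grid(position, cell=0.25):
    """Quantize a 3D position to the nearest grid cell so small hand
    tremors don't produce misaligned placements."""
    return tuple(round(p / cell) * cell for p in position)
```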

When designing, consider related versus independent dimensions and map interactions appropriately; e.g., a translation motion is different from a scrolling or color-scaling task. Also, designers don’t have to use all the degrees of freedom a sensor provides: reduce cognitive load by eliminating extraneous degrees of freedom.
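One way to read that advice: filter the sensor’s full six-DOF report down to the axes the task actually needs. A hypothetical sketch (the axis names are my own convention):

```python
def constrain_dof(pose_delta, allowed=("tx", "ty", "tz")):
    """Zero out degrees of freedom the task doesn't need.

    pose_delta maps each of the sensor's six DOF to a delta, e.g.
    {"tx": 0.1, "ty": 0.0, "tz": 0.2, "rx": 5.0, "ry": 0.0, "rz": 1.0}.
    For a pure translation task we discard the rotations entirely.
    """
    return {axis: (value if axis in allowed else 0.0)
            for axis, value in pose_delta.items()}
```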

The remainder of the paper begins to explain some dominant control metaphors: eyeball in hand, scene in hand, and flying vehicle (similar to the superhuman movements, possession, and ephemeral world transformations in the other paper).
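The difference between the first two metaphors comes down to what the hand’s motion drives; a minimal sketch under that reading, with names of my own invention:

```python
def eyeball_in_hand(camera_pos, hand_delta):
    """Hand motion moves the camera itself through a fixed scene."""
    return tuple(c + d for c, d in zip(camera_pos, hand_delta))

def scene_in_hand(scene_offset, hand_delta):
    """Hand motion drags the *world*: moving the hand right slides the
    scene right, which reads as the viewpoint moving left."""
    return tuple(s + d for s, d in zip(scene_offset, hand_delta))
```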

Considering some other issues related to immersion and sensemaking, 3DUI design should account for recalibration, i.e., the 3DUI equivalent of picking up the mouse. Also, people tend to have a small working volume (based on analysis of handwriting practice). Think about clutching, tool use, and mouse techniques as well.
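Clutching lends itself to a sketch; this hypothetical mapping ignores device motion while the clutch is released and resumes from the new position on re-engage, like lifting and replanting a mouse:

```python
class ClutchedMapping:
    """Clutched 3D cursor: motion only accumulates while engaged."""

    def __init__(self):
        self.engaged = False
        self.last_device = None
        self.cursor = (0.0, 0.0, 0.0)

    def update(self, device_pos, clutch_pressed):
        if clutch_pressed and self.engaged:
            # Accumulate only motion made while the clutch is held.
            delta = tuple(d - p for d, p in zip(device_pos, self.last_device))
            self.cursor = tuple(c + d for c, d in zip(self.cursor, delta))
        self.engaged = clutch_pressed
        self.last_device = device_pos
        return self.cursor
```

This lets the user recenter the device within a comfortable working volume without the cursor jumping.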

True to HCI dogma, we need to use human-centered design practice by learning from the spatial-understanding techniques used by ‘superusers’ with excellent spatial mastery: sculptors, surgeons, radiologists, FURNITURE MOVERS, and FPS video gamers. (I added the last one, and the emphasis on moving guys is mine.)

