Week 8 Summaries

Pre-Patterns for Designing Embodied Interactions in Handheld Augmented Reality Games

Handheld Augmented Reality (HAR) is a growing field, and HAR gaming in particular is attracting a lot of attention. However, there has been no formal study that systematizes the techniques used to create HAR games. In this paper, the authors propose a set of 9 pre-patterns, which are essentially design principles. Together they summarize the successful current knowledge in the discipline for the entire community of HAR game designers. The patterns:

– Device Metaphors: The handheld device should embody a recognizable metaphor from everyday life. For example, it acts as a magnifying glass in the game Bug Juice, which requires the player to stay a certain distance from the ants in order to burn them.

– Control Mapping: The device, used as a game controller, should support a set of intuitive physical actions that map to actions in the game.

– Seamful Design: Use constraints to limit the range of actions and thereby work around unreliable tracking technology.

– World Consistency: In a hybrid-world experience, players expect real-world principles, such as the laws of physics, to hold.

– Landmarks: These are used by the player to navigate through the game space.

– Living Creatures: To convince players that they inhabit one world rather than a hybrid, the digital game creatures should follow real-world rules.

– Personal Presence: Use the HAR interface to strengthen the player's sense of personal presence in the game world.

– Body Constraints: Design games so that the relative positions of the players are taken into account, letting them move freely while immersed without bumping into each other.

– Hidden Information: Players use body movement and social skills such as communication to infer hidden information, creating suspense and tension.
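The Control Mapping pattern above can be illustrated with a minimal sketch (my own illustration, not code from the paper): a hypothetical handler that translates physical device gestures into in-game actions. All gesture and action names are invented for the example.

```python
# Hypothetical sketch of the Control Mapping pre-pattern:
# physical device actions are mapped to intuitive in-game actions.
# Every gesture/action name here is invented for illustration.

GESTURE_TO_ACTION = {
    "tilt_forward": "move_avatar_forward",
    "shake": "shuffle_inventory",
    "point_at_marker": "select_target",
}

def handle_gesture(gesture: str) -> str:
    """Translate a detected physical gesture into a game action."""
    return GESTURE_TO_ACTION.get(gesture, "no_op")

print(handle_gesture("shake"))  # shuffle_inventory
print(handle_gesture("wave"))   # no_op (unmapped gesture)
```

The point of the pattern is that the left-hand side of such a mapping should consist of physical actions that already feel natural with the device in hand.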

The task gallery: a 3D window manager

This paper attempts to redesign the 2D windowed desktop into a 3D environment for managing open windows. To achieve this, the authors present all the windows in a gallery (much like an art gallery) and provide navigation controls.

The gallery has a linear layout, making it easy to remember sequentially and hard to get lost in the 3D space. Windows can be displayed on the floor, ceiling, and walls, but a psychological study suggested that it feels more natural to place them on the walls rather than the floor or ceiling. The main task is displayed directly in front of the user in the gallery, and the user can switch tasks by clicking.

They use animation to create a sense of spatial movement, combined with sound that reinforces whether a window is approaching or receding. A loose stack and an ordered stack let the user select windows: when a window is selected, it is brought close, and clicking on a window further away brings it forward. The authors also explore alternatives to conventional menus using the tool space. Navigation relies on simple mouse and keyboard controls supporting basic movements such as moving forward and back and looking in different directions. The system also supports showing multiple windows together.
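The stack-selection behavior described above can be sketched in a few lines (a hypothetical illustration of the idea, not the Task Gallery's actual implementation; class and method names are invented):

```python
# Hypothetical sketch of the Task Gallery's stack selection:
# clicking a window in an ordered stack brings it to the front,
# while the relative order of the remaining windows is preserved.

class OrderedStack:
    def __init__(self, windows):
        # Front of the list = window closest to the user.
        self.windows = list(windows)

    def select(self, title):
        """Bring the clicked window forward, keeping the rest in order."""
        if title in self.windows:
            self.windows.remove(title)
            self.windows.insert(0, title)
        return self.windows

stack = OrderedStack(["editor", "browser", "mail"])
print(stack.select("mail"))  # ['mail', 'editor', 'browser']
```

In the real system this reordering is accompanied by the animation and sound cues the summary mentions, so the user perceives the window as physically moving closer.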

The authors performed a number of usability tests and psychological studies to gather feedback and information about their system, which provided new design input.

Question: How did the 3D interface improve on the experience of a normal desktop? I ask because much of the screen seems to be filled with clutter and unimportant information. Is a 3D UI for windows really useful? (Although purely from a 3D UI perspective, this does seem interesting.)
