Week 8 Summaries

The Task Gallery: A 3D Window Manager

Task management has several components: creating tasks, locating them, and bringing them into focus. The Task Gallery is designed to support all of these. It is a 3D environment in which the current task is displayed on the front wall while other tasks are displayed on the floor, ceiling, and side walls. The scene represents an art gallery because this is a well-known environment and because it is intuitive. When users want to switch from the current task to another, they simply walk back through the gallery to discover more tasks, or new rooms with tasks on their walls. When a new view is selected, it replaces the old active view, and a “ghosted” view takes the current view’s former position. This helps the user remember where each task is located.

A task is composed of several windows. These windows are kept in a loose stack and an ordered stack. The loose stack resembles a conventional OS desktop display, while the ordered stack resembles the 3D window-cycling view offered by Windows systems. To make the system easier to use, a toolbar is also available, which lets the user trigger various operations. The Task Gallery was tested to discover how users tend to organize their tasks. The result: people organize their tasks in the 3D environment much as they would in a real one. Indeed, most tasks are placed on the walls, not on the ceiling or the floor. The main issue with this kind of representation is getting access to a bitmap of an application’s output. In normal operation, an application either draws directly to the screen or draws nothing; for this environment, its output must instead be rendered to a texture that can be placed in the 3D scene.
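The redirection described above can be sketched as follows. This is a minimal Python illustration of the idea, not the paper's actual implementation (which hooks the OS rendering pipeline); all class and function names here are hypothetical.

```python
# Sketch: redirecting an application's output into an offscreen buffer
# that a 3D scene can later sample as a texture. All names are
# hypothetical stand-ins for a real render-to-texture mechanism.

class OffscreenTexture:
    """A 2D RGB pixel buffer standing in for a GPU texture."""

    def __init__(self, width, height):
        self.width = width
        self.height = height
        self.pixels = [[(0, 0, 0)] * width for _ in range(height)]

    def draw_pixel(self, x, y, color):
        if 0 <= x < self.width and 0 <= y < self.height:
            self.pixels[y][x] = color

    def sample(self, u, v):
        """Sample with normalized (u, v) coordinates, as a 3D renderer would."""
        x = min(int(u * self.width), self.width - 1)
        y = min(int(v * self.height), self.height - 1)
        return self.pixels[y][x]


def run_app_offscreen(texture):
    """Stand-in for an application drawing into the buffer instead of the screen."""
    for x in range(texture.width):
        texture.draw_pixel(x, 0, (255, 0, 0))  # draw a red top row


if __name__ == "__main__":
    tex = OffscreenTexture(64, 48)
    run_app_offscreen(tex)
    # The 3D environment can now map `tex` onto a wall quad and
    # sample it at normalized coordinates.
    print(tex.sample(0.5, 0.0))  # → (255, 0, 0)
```

The key point is only the indirection: the application draws into a buffer rather than onto the screen, and the 3D environment treats that buffer as a texture.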

Pre-Patterns for Designing Embodied Interactions in Handheld Augmented Reality Games

The video game industry uses more and more new kinds of controllers to create more natural, more intuitive experiences. We can mention the Wii controller, the Microsoft Kinect, and the Sony pad with its gyroscope sensors (this last one is not mentioned in the article, but I add it to show that every console on the market uses new controllers). Handheld Augmented Reality (HAR) games do not currently have a large market, but they tend to become more and more popular. This paper relates the authors’ experience in this domain. Specifically, the article presents nine pre-patterns the authors have identified that leverage four kinds of embodied human skills: body awareness & skills, environment awareness & skills, naive physics, and social awareness & skills.

The nine pre-patterns are:

– device metaphor: the device can guide the user on how to interact with it by looking like an everyday object, such as a bin.

– mapping control: a controller with appropriate sensors can detect the user’s movements and reproduce them (or at least trigger an action) in the virtual environment.

– seamful design: handheld augmented reality interfaces rely on computer vision. They have to be designed to limit the area the user works in, or to help the user when there is a tracking failure.

– world consistency: when reproducing the real world, the user expects to find “real-world” properties such as gravity. But designers can also decide to omit some of them to create new experiences.

– personal presence: HAR interfaces can enhance a user’s feeling of presence.

– landmarks: landmarks help users orient themselves. Designers should use them to improve the user experience. (In a video game, nothing is worse than not knowing where to go.)

– living creatures: to increase the feeling that a virtual character is alive, it must react to real events, such as sounds or the user’s movements.

– body constraints: when a user moves, they change, add, or remove constraints on other players.

– hidden information: some information is accessible to only a subset of players. Body language or communication can provide new mechanisms to convey that information to players who would not normally get it.
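As an illustration of the mapping-control pre-pattern above, here is a minimal Python sketch (my own, not from the paper) that maps a hypothetical gyroscope tilt reading from the handheld device onto an action in the virtual world. The sensor values and threshold are assumptions for the example.

```python
# Sketch of the "mapping control" pre-pattern: a sensor reading from
# the handheld device is mapped onto a virtual-world action.
# The threshold and action names here are hypothetical.

TILT_THRESHOLD_DEG = 30.0  # beyond this tilt, the virtual cup pours


def map_tilt_to_action(tilt_deg):
    """Map a device tilt angle (in degrees) to a virtual action name."""
    if abs(tilt_deg) >= TILT_THRESHOLD_DEG:
        return "pour"  # the game mimics the physical gesture
    return "hold"      # small tilts are treated as noise


if __name__ == "__main__":
    for reading in (2.0, 15.0, 45.0):
        print(reading, "->", map_tilt_to_action(reading))
```

The point of the pattern is that the physical gesture (tilting the device) and its virtual consequence (pouring) correspond naturally, so the player does not have to learn an arbitrary control scheme.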
