Progress Report Two: AR-Based Orienteering

1. Target

1.1 UI Refinement and Navigation System Improvement

This is follow-up work from Progress Report 1. Previously we did not pay much attention to the usability of the layout; our main effort went into implementing the system's functionality, and the user interface existed only for debugging. In this phase, we carefully redesigned the layout with aesthetics in mind and implemented several handy UI operations to better support the player's navigation needs.

The new UI uses animation to dynamically indicate the current progress, which is proportional to the distance from the player to the current control point. Along the top of the window, one disk is shown for each control point. A checked control point is drawn as a green disk, an unchecked control point as a gray disk, and the control point currently being pursued as an empty ring that is gradually filled in; how full it is depends on the player-to-control-point distance. If the player taps the ring, an enlarged, detailed image is drawn with the distance shown in the center of the ring. Tapping again makes this image disappear.

The campus map has also been improved for better usability. Following the conventions of smartphone user interfaces, we made the map movable and scalable with the familiar finger gestures: touching and moving one finger translates the map with the movement, while the relative movement of two fingers triggers a pinch-zoom event that scales the map.

1.2 Design Tasks at Control Points

Our target for control point tasks in this phase is to implement “Outline Match” and “Fight Monsters”.

In the last report, we planned to implement two tasks at control points: “Collecting Pieces of Information” and “Fight Monsters”. However, we later realised that the techniques used in “Collecting Pieces of Information” are exactly the same as those we had already used in the “Find a Specific Item” task. So we gave up on it and tried to think of a more creative and interesting game. This is how “Outline Match” was born.

We described “Fight Monsters” in previous reports, so here we introduce our new game. In “Outline Match”, players are given an outline of a specific item, such as a building or a door, on their phone screen, and they know roughly where it is. Only when they track it from a specific position, so that the item matches the outline, do they receive some virtual information, which is the clue to the next control point.

1.3 Make a Simple Orienteering Plan

This work is preparation for the next stage. We need to be clear about what we have now and how to organize it into the orienteering course. Details are given in Section 3 of this report. The plan may change in accordance with our work in the final stage.

2. How the Goal is Met

2.1 Layout Redesign

A CSS file is used to divide the page into several divisions.

The ‘header’ division appears at the top of the page and contains all the small disks. Each disk represents a control point and is implemented with a CSS ‘circle’ class. The number of circle divisions is determined by nControlPoint, and the circles are arranged within modified <ul> and <li> tags, which make the disks appear in a single row with even horizontal spacing. A global variable current indicates the index of the current control point; it is updated in the initialization function and every time the player registers at a control point. Every circle division with an index less than current has its background changed to a green disk; every circle division with an index greater than current has its background changed to a gray disk; and the circle division with an index equal to current ($('#c'+current)) is given a ring as its background. A child node is appended to this current division to fill part of the ring, and the filled proportion is updated every frame according to the player-to-control-point distance. A ‘touchstart’ tap listener is also added to $('#c'+current).
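
As an illustration, here is a minimal sketch of the per-frame header update. It assumes jQuery, element IDs ‘c0’ through ‘c(n-1)’, hypothetical CSS classes, and a hypothetical getDistanceToControlPoint() helper, none of which are our exact code.

    // A sketch of the per-frame header update (assumed helpers and CSS
    // classes: getDistanceToControlPoint() returns meters, 'checked' is
    // the green disk, 'unchecked' the gray disk, 'ring' the empty ring).
    function updateHeaderDisks(current, nControlPoint, startDistance) {
        for (var i = 0; i < nControlPoint; i++) {
            var $disk = $('#c' + i);
            $disk.removeClass('checked unchecked ring');
            if (i < current) {
                $disk.addClass('checked');        // already registered: green
            } else if (i > current) {
                $disk.addClass('unchecked');      // not reached yet: gray
            } else {
                $disk.addClass('ring');           // active point: empty ring
                // Fill the ring in proportion to how close the player is.
                var d = getDistanceToControlPoint(current);
                var fraction = Math.max(0, Math.min(1, 1 - d / startDistance));
                $disk.find('.fill').css('height', (fraction * 100) + '%');
            }
        }
    }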

The ‘mainPart’ division sits below the ‘header’ division and holds the enlarged processing disk, the compass, and the map. When the ‘touchstart’ event fires, we check whether the enlarged processing disk exists. If it does not and the map is not on, we create an enlarged processing disk division and add it to the ‘mainPart’ division; a text division is also created and placed in the center of the disk to indicate the distance. If the enlarged processing disk is already shown, we remove this node from ‘mainPart’ to retrieve the camera view. Similarly, if the device is held vertically, we remove the ‘map’ and ‘compass’ divisions from ‘mainPart’; otherwise, we add them to show the map and compass.
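
A minimal sketch of the tap toggle, again with jQuery; the ‘bigDisk’ ID, the mapIsOn flag, and the distance helper are illustrative assumptions.

    // A sketch of the tap toggle on the current disk.
    $('#c' + current).on('touchstart', function () {
        var $big = $('#bigDisk');
        if ($big.length === 0 && !mapIsOn) {
            // First tap: build the enlarged ring with a centered distance label.
            $big = $('<div id="bigDisk" class="ring"></div>').append(
                $('<div class="distText"></div>').text(
                    Math.round(getDistanceToControlPoint(current)) + ' m'));
            $('#mainPart').append($big);
        } else {
            // Second tap: remove the node to retrieve the camera view.
            $big.remove();
        }
    });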

2.2 Compass

We implemented the compass function by using a compass image and updating its orientation with the ‘deviceorientation’ event. The compass shows up when the user lays the device flat, which is implemented using the same algorithm as the map.
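
As a sketch, assuming a compass image with ID ‘compass’: the rotation can be driven as below. The iOS-specific webkitCompassHeading fallback is a common workaround, since the alpha value of ‘deviceorientation’ is not compass-referenced on every browser.

    // A sketch of driving the compass image from 'deviceorientation'.
    // event.alpha is the rotation about the z-axis in degrees; iOS
    // exposes the true heading as event.webkitCompassHeading instead.
    window.addEventListener('deviceorientation', function (event) {
        var heading = (event.webkitCompassHeading !== undefined)
            ? event.webkitCompassHeading   // iOS: degrees clockwise from north
            : 360 - event.alpha;           // elsewhere: alpha is counterclockwise
        // Counter-rotate the image so its needle keeps pointing north.
        document.getElementById('compass').style.transform =
            'rotate(' + (-heading) + 'deg)';
    });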

2.3 Interactive Map

Based on our previous implementation, we implemented the interactive orienteering map using touch events in JavaScript. The map image is drawn on a 2D canvas inside a div. The ‘touchstart’ event is used to save the position where each touch started. In the ‘touchmove’ event, if the number of touches is one, we adjust the image offset to create the pan interaction; when the number of touches is two, we change the image size based on the distance between the two fingers.
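
A minimal sketch of the gesture handling; the canvas element, drawMap(), and the offset/scale state are illustrative assumptions.

    // One-finger pan and two-finger pinch zoom on the map canvas.
    var startTouches = [], offsetX = 0, offsetY = 0, scale = 1;

    function fingerDistance(t0, t1) {
        return Math.hypot(t1.clientX - t0.clientX, t1.clientY - t0.clientY);
    }

    canvas.addEventListener('touchstart', function (e) {
        startTouches = Array.from(e.touches);   // save where each touch started
    });

    canvas.addEventListener('touchmove', function (e) {
        e.preventDefault();
        if (e.touches.length === 1) {
            // One finger: translate the map with the finger movement.
            offsetX += e.touches[0].clientX - startTouches[0].clientX;
            offsetY += e.touches[0].clientY - startTouches[0].clientY;
        } else if (e.touches.length === 2) {
            // Two fingers: scale by the change in inter-finger distance.
            scale *= fingerDistance(e.touches[0], e.touches[1]) /
                     fingerDistance(startTouches[0], startTouches[1]);
        }
        startTouches = Array.from(e.touches);
        drawMap(offsetX, offsetY, scale);        // redraw on the 2D canvas
    });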

2.4 Outline Match

The work starts from picture processing. We took a picture of a door on the second floor of TSRB and added an outline to it with Photoshop. The opacity of the background is set to 20% so that the outline is very obvious and users can see through the image to track the item with Argon. The item in the camera and the outline match only when the user is standing at a specific position, at the right angle and distance. We record the relative coordinates of the tracked image, with the camera as the origin. Whenever a player comes to the same position, the outline and the item will match, and the relative coordinates will be close to what we recorded. In this condition, we add the virtual information to the canvas.
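
A sketch of the position test; the recorded offsets, the tolerance, and the helpers getTrackedImagePose() and showVirtualClue() are hypothetical placeholders, not our exact values.

    // Decide whether the tracked image's pose relative to the camera
    // is close enough to the recorded one.
    var recorded = { x: 0.4, y: -0.1, z: -2.0 };   // example offsets in meters
    var TOLERANCE = 0.3;                           // assumed tolerance in meters

    function outlineMatches() {
        var p = getTrackedImagePose();   // target position in camera coordinates
        var dx = p.x - recorded.x,
            dy = p.y - recorded.y,
            dz = p.z - recorded.z;
        return Math.sqrt(dx * dx + dy * dy + dz * dz) < TOLERANCE;
    }

    // Each frame, reveal the clue once the player stands in the right spot.
    function onFrame() {
        if (outlineMatches()) {
            showVirtualClue();
        }
    }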

2.5 Fight Monsters

In this little game, monsters appear on the participant's phone screen at a specific control point. Users need to shoot down all the enemies with their phone to get the information for the next task. When our system detects that a user has reached the control point containing this game, we create a geoObject at the user's current geolocation and add 20 monsters to it. The monsters' initial positions are randomly generated on a hemispherical surface, and we generate a random direction for each monster so that it can move around that surface. To make the game more interesting, each monster has a probability of changing its moving direction.
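
A sketch of the spawning step, sampling uniformly over an upper hemisphere; the radius and the monster fields are illustrative.

    // Place n monsters at random points on an upper hemisphere of the
    // given radius around the geoObject origin.
    function spawnMonsters(n, radius) {
        var monsters = [];
        for (var i = 0; i < n; i++) {
            var theta = 2 * Math.PI * Math.random();   // azimuth angle
            var phi = Math.acos(Math.random());        // polar angle in [0, pi/2]
            monsters.push({
                x: radius * Math.sin(phi) * Math.cos(theta),
                y: radius * Math.cos(phi),             // height above the player
                z: radius * Math.sin(phi) * Math.sin(theta),
                heading: 2 * Math.PI * Math.random()   // random movement direction
            });
        }
        return monsters;
    }

    var monsters = spawnMonsters(20, 10);   // 20 monsters on a 10 m hemisphere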

In our design, the user kills a monster by tapping the screen when the monster is under the crosshair in the middle of the screen. To implement this, we need the camera direction in order to test whether a monster is under the crosshair. We have tried many ways but have not yet figured out how to get the camera direction from the rotation matrix of the device. Once we know that, we can finish the rest of the implementation quickly.
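
For reference, one common approach (not yet implemented in our system) is sketched below. It assumes a column-major camera-to-world rotation matrix, so the camera looks along the negated third column; conventions vary between libraries, so the sign or the column may need adjusting.

    // Extract the world-space view direction from a 3x3 rotation
    // matrix m stored column-major: the camera looks along its local
    // -Z axis, i.e. the negated third column.
    function cameraForward(m) {
        return normalize({ x: -m[6], y: -m[7], z: -m[8] });
    }

    // A monster is under the crosshair when the angle between the view
    // direction and the direction to the monster is below a threshold.
    function isUnderCrosshair(forward, cameraPos, monsterPos, maxAngleRad) {
        var d = normalize({
            x: monsterPos.x - cameraPos.x,
            y: monsterPos.y - cameraPos.y,
            z: monsterPos.z - cameraPos.z
        });
        var dot = forward.x * d.x + forward.y * d.y + forward.z * d.z;
        return Math.acos(Math.min(1, Math.max(-1, dot))) < maxAngleRad;
    }

    function normalize(v) {
        var len = Math.sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return { x: v.x / len, y: v.y / len, z: v.z / len };
    }

With the forward vector in hand, the tap handler would only need to loop over the monsters and remove any that pass this test.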

3. Orienteering Plan

Currently we have three tasks associated with three control points.

The first control point is set to be the map board in front of TSRB. The map embedded in that board is processed as an image target. The player needs to find the map board using the distance indicator, campus map, and compass. After the player finds it and successfully registers the map target using Argon, the system marks this control point as checked and gives the player the second control point's information.

Our hint for the second control point is based on a half-transparent picture. The player needs to find the place where the picture was taken and point the camera in exactly the same direction so that the camera image matches the hint image. Once the system recognizes the camera image as a match for the hint image, the second control point is marked as checked and the last task is unlocked.

The last task is the shooting game. When players reach the third control point, a certain number of monsters appear virtually around them. The goal is to kill all the monsters using the shooting aim at the center of the screen. After clearing all the monsters, the task is complete.

4. Videos

Layout: http://www.youtube.com/watch?v=phXruzrki4M

Outline Match: http://www.youtube.com/watch?v=DLsA1YzEli4

Fight Monsters: http://www.youtube.com/watch?v=TXlrJWHlBz0

5. Work Plan in Phase Three

Date   | Schedule                                                                               | Note
Apr 4  | Complete the design of the “Fight Monsters” game; make the user interface work better | Meet at 3 pm
Apr 11 | Combine the different parts together; prepare the whole system for a demo             | Meet at 3 pm
Apr 17 | Get through the orienteering with the system                                           | Meet at 3 pm
Apr 18 | Write the final report and prepare for the presentation                               | Meet at 3 pm

6. Group Members

Bo Pang

Xueyun Zhu

Xuwen Xie
