Assignment 1: Plants vs. Zombies

In this assignment I simulate a simple round of Plants vs. Zombies. The program is not complex; it is mainly a test of whether the game can be migrated onto Argon. I used the game's cover art as the reference image. When the device finds the target, it displays the playing field and a monster. The monster moves forward when the camera gets close to the image target.

I have two markers. The first one is for the plant: when this marker is detected, a sunflower appears on the field and starts shooting the monster. The second one is for the plane: it carries the monster away directly. The player can win with either marker.

Step-by-step instructions:

  1. Load the URL: http://www.prism.gatech.edu/~xxie37/basic.html
  2. Track the image in the PDF document; the playing field and a monster will appear
  3. Move the camera close to the tracked image; the monster will start moving
  4. Put the first marker on the playing field; a flower will come out to help you
  5. It takes two bullets to kill the monster (first way to win)
  6. When the monster reaches the midpoint, move the camera farther away to make it stop
  7. Put the second marker on the playing field; a plane will appear on the ground
  8. Move the second marker so the plane gets close to the monster and takes it away (second way to win)

Link to Video:  

Requirements of the assignment:

1. Display non-trivial 3D content (HTML and WebGL) relevant to your experience, in the space of the image target.  (3/10)

I got the models from OurBricks and added them to trackedObject so that they appear in the space of the image target.

2. Handle tracking gracefully, in particular when the image target or a marker is lost for a small amount of time due to transient occlusion or computer vision errors.  Feedback and content should reflect such losses appropriately. (1/10)

I handle tracking losses: when the camera loses its target, I use console.log to print "Track the image or marker please", and when the target is found again, I print "Target Found". The only problem is that I don't know how to display these messages as 3D objects, because I don't know how to get the coordinates of the camera.
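The feedback described above can be sketched as a small state machine that only reports transitions, so the message prints once when tracking is lost or regained rather than on every frame. This is a minimal sketch; `makeTrackingMonitor` and the per-frame boolean flag are hypothetical names, not part of the Argon API.

```javascript
// Hypothetical helper: wraps a logger and reports only the
// lost -> found and found -> lost transitions of a tracking flag.
function makeTrackingMonitor(log) {
  let wasTracking = true; // assume we start with the target in view
  return function update(isTracking) {
    if (!isTracking && wasTracking) {
      log("Track the image or marker please");
    } else if (isTracking && !wasTracking) {
      log("Target Found");
    }
    wasTracking = isTracking;
  };
}
```

In the real experience, `update` would be called once per render frame with the tracker's current status, e.g. `update(targetIsVisible)`, with `console.log` passed in as the logger.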

3. Have the content change in compelling ways based on the movement of the camera and its relationship to the image target (2/10)

When the camera gets close to the image target, the monster moves; when the camera moves away from the target, the monster stops.
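The movement rule above can be sketched as a per-frame step that advances the monster only while the camera is within some distance of the image target. This is an assumed simplification: positions are plain `{x, y, z}` objects, and the threshold and speed values are made-up tuning numbers, not values from the actual program.

```javascript
// Assumed tuning value: how close the camera must be (in target units)
// before the monster starts advancing.
const NEAR_THRESHOLD = 0.5;

// Euclidean distance between two {x, y, z} points.
function distance(a, b) {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// Advance the monster along the field only when the camera is near.
function stepMonster(monster, cameraPos, targetPos, speed) {
  if (distance(cameraPos, targetPos) < NEAR_THRESHOLD) {
    monster.x += speed; // move ahead along the field's x axis
  }
  return monster;
}
```

In the real experience this would run each frame, with the camera and target poses read from the tracking framework.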

4. Have content on fixed marker affected by trigger marker(s). (1/10)

The monster will be killed by the plant after being shot twice.
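The two-hit rule can be sketched as a simple hit-point counter; the function names here are hypothetical, standing in for whatever the sunflower's bullet collision handler calls.

```javascript
// Hypothetical monster state: 2 hit points means two bullets kill it.
function makeMonster(hitPoints) {
  return { hp: hitPoints, alive: true };
}

// Called once per bullet hit; flips `alive` when hp runs out.
function hit(monster) {
  if (!monster.alive) return monster;
  monster.hp -= 1;
  if (monster.hp <= 0) monster.alive = false;
  return monster;
}
```

For example, `const zombie = makeMonster(2)` stays alive after one `hit(zombie)` and dies after the second.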

5. Have non-trivial 3D content on interactor marker(s). (1/10)

There is a plane model on the interactor marker.

6. Have content on the image target and the interactor marker(s) react to relative movement or location of the interactor relative to the image target. (2/10)

The user has to move the plane towards the monster in order to kill it.
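The interactor check above amounts to a proximity test between the plane marker's position and the monster, both expressed relative to the image target. A minimal sketch, assuming `{x, y, z}` positions and a made-up capture radius:

```javascript
// Assumed value: how close (in target units) the plane must get
// before it "takes away" the monster.
const CAPTURE_RADIUS = 0.2;

// True when the plane is within the capture radius of the monster.
function planeReaches(planePos, monsterPos) {
  const dx = planePos.x - monsterPos.x;
  const dy = planePos.y - monsterPos.y;
  const dz = planePos.z - monsterPos.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz) < CAPTURE_RADIUS;
}
```

In the real experience, the plane's pose would come from the interactor marker's tracked transform, evaluated in the image target's coordinate frame each frame; when the test passes, the monster model is removed from the scene.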

Problems:

  1. Running Argon2 on an iPad is a bad experience, because the device is too large to hold up comfortably as a camera.
  2. The plane only works when the monster is in the middle of the playing field, because of a problem in my position calculation.
  3. I cannot display the "loss of target" message as a 3D object; I can only output it to the console.
