Week 2: Summary

Merging Virtual Objects with the Real World

Seeing Ultrasound Imagery within the Patient

Michael Bajura, Henry Fuchs and Ryutarou Ohbuchi

The authors set out to design the “ultimate” system, one that acquires and displays 3D volume data in real time. They focused their research on three areas: 1) algorithms for acquiring and rendering volume data, 2) creating a working virtual environment, and 3) recovering structural information. Their proposed incremental volume rendering algorithm reconstructs a 3D volume from samples of the target function taken at irregular times, with spatial and temporal reconstruction based on an autoregressive moving-average (ARMA) model, so each voxel blends fresh samples with decaying older ones. Shading and ray sampling are performed only for voxels close to the incoming data, and ray caching makes the whole process efficient (a toy sketch of the incremental update follows below).

The virtual environment is created by overlaying 2D ultrasound images onto images of the real world. Each ultrasound image is acquired along with the position and orientation of the transducer, and the same pose information is tracked for the HMD. A 3D rendering is computed from this information and mixed with live video from a TV camera mounted on the HMD. Calibration requires measuring a “transducer transformation” and a “camera transformation” (also sketched below). After lab tests on water tanks and dolls, the first live experiment was on a volunteer who was 38 weeks pregnant. The results were exciting but showed that the system still needs a lot of work. It was limited mainly by the technology available: system lag in the image overlays, a lack of depth cues (mitigated by rendering a synthetic “pit” into the abdomen so the ultrasound data did not appear to float above the patient), poor tracker range and stability across the tracking volume, and low HMD resolution and display quality. The authors worked around most of these problems with clever tricks, but the underlying technologies need to evolve before the system can get much better.
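
As a rough illustration of the incremental reconstruction idea (my own sketch, not the authors' implementation), the snippet below blends each irregularly timed slice of samples into a voxel grid using a first-order ARMA-style update: old data decays everywhere, and decayed voxels accept new samples more readily. The grid size, decay factor, and function names are assumptions for illustration only.

```python
import numpy as np

# Hypothetical voxel grid: a reconstructed intensity plus a per-voxel
# confidence that decays as data ages (names and sizes are assumptions).
GRID_SHAPE = (64, 64, 64)
DECAY = 0.9  # assumed per-update decay factor for old data

def insert_slice_samples(volume, confidence, voxel_indices, intensities):
    """Blend one irregularly timed slice of samples into the 3D volume.

    volume, confidence: 3D float arrays, updated in place.
    voxel_indices: (N, 3) integer array of voxels hit by the new slice.
    intensities: (N,) array of ultrasound sample values for those voxels.
    """
    confidence *= DECAY                       # older data becomes less trusted
    i, j, k = voxel_indices.T
    w = 1.0 - confidence[i, j, k]             # weight for the incoming samples
    volume[i, j, k] = (1.0 - w) * volume[i, j, k] + w * intensities
    confidence[i, j, k] = 1.0                 # these voxels were just observed

# Toy usage: 100 fake samples from one slice land at random voxels.
rng = np.random.default_rng(0)
vol = np.zeros(GRID_SHAPE)
conf = np.zeros(GRID_SHAPE)
insert_slice_samples(vol, conf, rng.integers(0, 64, size=(100, 3)), rng.random(100))
```

One side effect of this kind of blending is that it averages noisy samples over time wherever slices overlap, which is relevant to the question below.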

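The calibration the summary mentions amounts to chaining rigid transforms between the image, tracker sensor, world, and camera frames. Here is a minimal sketch, assuming 4x4 homogeneous matrices and made-up pose values standing in for real tracker readings and calibration constants:

```python
import numpy as np

def rigid(R, t):
    """4x4 homogeneous transform from a 3x3 rotation and a translation vector."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Assumed poses, stand-ins for real tracker readings and calibration results.
world_from_usound_sensor = rigid(np.eye(3), [0.1, 0.2, 0.3])   # tracker on the transducer
usound_sensor_from_image = rigid(np.eye(3), [0.0, 0.0, 0.05])  # the "transducer transformation"
world_from_hmd_sensor = rigid(np.eye(3), [0.0, 1.5, 0.0])      # tracker on the HMD
camera_from_hmd_sensor = rigid(np.eye(3), [0.0, -0.1, 0.0])    # the "camera transformation"

# A sample point on the 2D ultrasound image plane (homogeneous coordinates).
p_image = np.array([0.01, 0.02, 0.0, 1.0])

# Chain the transforms: image -> transducer sensor -> world -> HMD sensor -> camera.
p_world = world_from_usound_sensor @ usound_sensor_from_image @ p_image
camera_from_world = camera_from_hmd_sensor @ np.linalg.inv(world_from_hmd_sensor)
p_camera = camera_from_world @ p_world

print("ultrasound point in camera frame:", p_camera[:3])
```

Once a point is expressed in the camera frame, the camera's projection places the rendered ultrasound data at the right spot in the live video, which is why errors in either calibration show up directly as overlay misregistration.
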
QUESTION: There was mention of a lot of noise in ultrasound images; how does that noise affect the reconstructed 3D model?


Designing Interactive Theme Park Rides

Jesse Schell and Joe Shochet

The authors share how they created the five-minute interactive theme park ride Pirates of the Caribbean: Battle for Buccaneer Gold. The goal was to strike a balance between letting guests control their own adventure and making sure every adventure was a great one. That meant steering guests away from dull areas and pacing the experience so it builds to a climax. For steering, they used four techniques: “weenies” (large visual cues that attract attention), guide ships, sneak attacks, and, for anyone who did not take those three cues, waterspouts, essentially invisible force fields that spin the ship around without players noticing. Pacing was managed through Jolly Roger the ghost pirate, who both opens and closes the ride. In the climax you either beat him or lose to him, and losing was made the more rewarding outcome to keep guests happy.

An intuitive interface was crucial, since a five-minute ride leaves no time for instruction. They used the shared subtext of being a pirate to put guests in the right frame of mind, and limited the controls to a wheel for steering and cannons for shooting. They also bent reality to make play easier, for example with blue cannonballs that are easy to track, or by adjusting the physics of the balls and the speed of the ship in the players' favor (a toy sketch of this kind of assist follows below). 3D sound, tactile speakers, a convincing virtual world, and a motion base fully immerse the players. The team reused much of Disney's existing technology and built the system iteratively with intensive guest testing; an interpreted scripting language let them modify and test the system while it was running (also sketched below), which kept the game balanced. Since people rarely visit a theme park alone, making the game depend on a group was important, and it worked well.
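
As a toy illustration of bending physics in the players' favor (my own sketch, not code from the paper), the snippet below nudges a cannonball's velocity toward the nearest enemy ship while preserving the shot's speed; the blend factor and names are assumptions.

```python
import math

ASSIST = 0.25  # assumed blend factor: 0 = honest physics, 1 = guaranteed hit

def aim_assist(ball_pos, ball_vel, targets, assist=ASSIST):
    """Return a cannonball velocity nudged toward the nearest target.

    ball_pos, ball_vel: (x, y) tuples for the cannonball.
    targets: list of (x, y) enemy-ship positions.
    """
    nearest = min(targets, key=lambda t: math.dist(ball_pos, t))
    to_target = (nearest[0] - ball_pos[0], nearest[1] - ball_pos[1])
    norm = math.hypot(*to_target) or 1.0
    speed = math.hypot(*ball_vel) or 1.0
    # Blend the player's aim with a unit vector toward the target,
    # then rescale so the shot keeps its original speed.
    vx = (1 - assist) * ball_vel[0] + assist * speed * to_target[0] / norm
    vy = (1 - assist) * ball_vel[1] + assist * speed * to_target[1] / norm
    rescale = speed / (math.hypot(vx, vy) or 1.0)
    return (vx * rescale, vy * rescale)

# A shot aimed slightly wide of a ship at (10, 0) gets pulled toward it.
print(aim_assist((0.0, 0.0), (5.0, 1.0), [(10.0, 0.0), (-8.0, 6.0)]))
```

The small blend factor is the point: the correction is strong enough to make shots feel satisfying but weak enough that players do not notice the help.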

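The live-tuning workflow the summary describes, changing the game while it runs, can be mimicked with any embedded interpreter. A minimal sketch, using Python's built-in exec as a stand-in for their scripting language and an entirely made-up tuning table:

```python
# Minimal sketch of runtime-tunable game parameters, in the spirit of the
# interpreted scripting language described above (all names are made up).
TUNING = {"ship_speed": 4.0, "cannon_damage": 10.0}

def game_loop(steps=3):
    """Stand-in for the running ride: reads TUNING every frame."""
    for _ in range(steps):
        print(f"frame: speed={TUNING['ship_speed']}, damage={TUNING['cannon_damage']}")

def console():
    """Designer console: executes tuning commands against the live game state."""
    for command in ['TUNING["ship_speed"] = 6.5']:  # would normally read stdin
        exec(command, {"TUNING": TUNING})

game_loop()
console()     # a designer tweaks ship speed during a guest test
game_loop()   # the next frames pick up the new value, no restart needed
```

The decoupling is what matters: designers iterate on balance numbers while the engine keeps running, which is what made intensive guest testing practical.
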
QUESTION: If a sword-fighting sequence were incorporated into the climax, how would you do it? Use a robot, magical swords that send out sharp air waves that can cut through the enemy, or a trick finish that avoids a direct confrontation entirely, say by cutting a rope that drops a huge sack on top of Jolly Roger?
