Week 5 Summary

Kinect Fusion

Kinect Fusion is new backend software for the existing Kinect camera produced by Microsoft.  At its base level the software is a sophisticated 3D mapping tool.  It uses the depth camera in the Kinect system to build a 3D model of anything the camera sees in real time.  The backbone of this technology is CUDA.  The technology could be a huge boost to AR because of its ability to quickly model a 3D scene, which allows users to physically interact with virtual objects.  The paper describes the interaction between a hand and some virtual particles that collide with and respond to the user's arm.  In addition to being able to simulate the physics of virtual items, the system also has rendering capabilities based on ray casting.  Another important feature is that the system requires no infrastructure to run, allowing it to be used anywhere.  The camera doesn't even need to be mounted.  In fact, the camera is designed to be used unmounted, which normally makes the data less accurate because of slight movements in the camera's position or orientation.  In this case, the system actually uses that handheld motion to improve its reconstruction.
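To get a feel for what the GPU is doing during reconstruction, here is a rough, kernel-only CUDA sketch of the volumetric fusion idea: every voxel in a 3D grid is projected into the current depth frame and its truncated signed distance value is updated as a weighted running average.  This is my own illustration, not the paper's code; the names and parameters (VOL_DIM, VOXEL_SIZE, TRUNC_DIST, fuseKernel, etc.) are assumptions.

#include <cuda_runtime.h>

#define VOL_DIM    256          // voxels per side of the cubic volume (assumed)
#define VOXEL_SIZE 0.01f        // metres per voxel (assumed)
#define TRUNC_DIST 0.03f        // truncation band in metres (assumed)

struct Intrinsics { float fx, fy, cx, cy; };   // pinhole camera intrinsics

__global__ void fuseKernel(float* tsdf, float* weight,
                           const float* depth, int width, int height,
                           Intrinsics K,
                           const float* pose /* 3x4 row-major world-to-camera */)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= VOL_DIM || y >= VOL_DIM) return;

    // One thread walks an entire z-column of voxels.
    for (int z = 0; z < VOL_DIM; ++z) {
        // Voxel centre in world coordinates (volume anchored at the origin).
        float wx = (x + 0.5f) * VOXEL_SIZE;
        float wy = (y + 0.5f) * VOXEL_SIZE;
        float wz = (z + 0.5f) * VOXEL_SIZE;

        // Transform the voxel centre into the camera frame.
        float cx_ = pose[0]*wx + pose[1]*wy + pose[2]*wz  + pose[3];
        float cy_ = pose[4]*wx + pose[5]*wy + pose[6]*wz  + pose[7];
        float cz_ = pose[8]*wx + pose[9]*wy + pose[10]*wz + pose[11];
        if (cz_ <= 0.f) continue;                      // behind the camera

        // Project onto the depth image.
        int u = __float2int_rn(K.fx * cx_ / cz_ + K.cx);
        int v = __float2int_rn(K.fy * cy_ / cz_ + K.cy);
        if (u < 0 || u >= width || v < 0 || v >= height) continue;

        float d = depth[v * width + u];
        if (d <= 0.f) continue;                        // no depth measurement here

        // Signed distance to the observed surface, truncated to a narrow band.
        float sdf = d - cz_;
        if (sdf < -TRUNC_DIST) continue;               // voxel far behind the surface
        float tsdfNew = fminf(1.f, sdf / TRUNC_DIST);

        // Weighted running average fuses this frame with all previous ones.
        int idx = (z * VOL_DIM + y) * VOL_DIM + x;
        float w = weight[idx];
        tsdf[idx]   = (tsdf[idx] * w + tsdfNew) / (w + 1.f);
        weight[idx] = fminf(w + 1.f, 128.f);           // cap so old data can be overwritten
    }
}

Because each voxel column is independent, this maps naturally onto CUDA threads, which is roughly why the system can fuse every incoming depth frame in real time; the ray-cast rendering mentioned above then marches rays through the same volume to find the zero crossing of the stored distance values.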
