We present KinectFusion, a system that takes live depth data from a moving Kinect camera and, in real time, creates high-quality, geometrically accurate 3D models. Our system allows a user holding a Kinect camera to move quickly within any indoor space and rapidly scan and create a fused 3D model of the whole room and its contents within seconds. Even small motions, caused for example by camera shake, lead to new viewpoints of the scene and thus to refinements of the 3D model, similar to the effect of image super-resolution. As the camera is moved closer to objects in the scene, more detail can be added to the acquired 3D model.
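The "fusing" the abstract describes is volumetric: each depth frame contributes truncated signed distances to a voxel grid, and per-voxel running weighted averaging is what turns many noisy viewpoints into a refined surface (the super-resolution-like effect). A minimal sketch of that averaging rule is below; the function name, array layout, and truncation value are illustrative assumptions, and the per-frame step of projecting voxels into the depth image is assumed to have already produced `new_sdf`.

```python
import numpy as np

def fuse_tsdf(tsdf, weights, new_sdf, new_weight=1.0, trunc=0.1):
    """Fuse one frame's signed distances into a global TSDF grid.

    tsdf, weights : current per-voxel distance and accumulated weight
    new_sdf       : this frame's signed distance per voxel (illustrative input)
    trunc         : truncation band around the surface, in metres (assumed)
    """
    # Clamp distances to the truncation band near the surface.
    d = np.clip(new_sdf, -trunc, trunc)
    # Only voxels whose measurement lies inside the band are updated.
    valid = np.abs(new_sdf) < trunc
    w_new = weights + new_weight * valid
    # Running weighted average; untouched voxels keep their old value.
    fused = np.where(
        valid,
        (tsdf * weights + d * new_weight) / np.maximum(w_new, 1e-9),
        tsdf,
    )
    return fused, w_new

# Two noisy observations of the same voxel average toward the true distance,
# which is the mechanism behind the model refinement described above.
tsdf = np.zeros(3)
weights = np.zeros(3)
tsdf, weights = fuse_tsdf(tsdf, weights, np.array([0.05, -0.05, 0.5]))
tsdf, weights = fuse_tsdf(tsdf, weights, np.array([0.07, -0.03, 0.5]))
```

After the two calls, the first two voxels hold the averages 0.06 and -0.04, while the third (outside the truncation band) is never touched and keeps weight 0.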