KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera
Proceedings of the 24th annual ACM symposium on User interface software and technology
Integrating human and computer vision with EEG toward the control of a prosthetic arm
HRI '12 Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction
Walk&Sketch: create floor plans with an RGB-D camera
Proceedings of the 2012 ACM Conference on Ubiquitous Computing
Integrating the physical environment into mobile remote collaboration
MobileHCI '12 Proceedings of the 14th international conference on Human-computer interaction with mobile devices and services
Real-time camera tracking: when is high frame-rate best?
ECCV'12 Proceedings of the 12th European conference on Computer Vision - Volume Part VII
Monocular visual odometry and dense 3d reconstruction for on-road vehicles
ECCV'12 Proceedings of the 12th European conference on Computer Vision - Volume 2
Achieving robust alignment for outdoor mixed reality using 3D range data
Proceedings of the 18th ACM symposium on Virtual reality software and technology
Integrating approximate depth data into dense image correspondence estimation
Proceedings of the 9th European Conference on Visual Media Production
A memory-efficient KinectFusion using octree
CVM'12 Proceedings of the First international conference on Computational Visual Media
Octree-based fusion for realtime 3D reconstruction
Graphical Models
Scalable real-time volumetric surface reconstruction
ACM Transactions on Graphics (TOG) - SIGGRAPH 2013 Conference Proceedings
Reconstructing sequential patterns without knowing image correspondences
ACCV'12 Proceedings of the 11th Asian conference on Computer Vision - Volume Part IV
3D from looking: using wearable gaze tracking for hands-free and feedback-free object modelling
Proceedings of the 2013 International Symposium on Wearable Computers
Homography-based monocular dense reconstruction for a mobile robot
Proceedings of Conference on Advances In Robotics
Quick and dirty: streamlined 3D scanning in archaeology
Proceedings of the 17th ACM conference on Computer supported cooperative work & social computing
Multi-resolution surfel maps for efficient dense 3D modeling and tracking
Journal of Visual Communication and Image Representation
Live RGB-D camera tracking for television production studios
Journal of Visual Communication and Image Representation
Fast vision-based scene modeling for augmented reality in unprepared man-made environments
Journal of Ambient Intelligence and Smart Environments - Design and Deployment of Intelligent Environments
DTAM is a system for real-time camera tracking and reconstruction which relies not on feature extraction but on dense, every-pixel methods. As a single hand-held RGB camera flies over a static scene, we estimate detailed textured depth maps at selected keyframes to produce a surface patchwork with millions of vertices. We use the hundreds of images available in a video stream to improve the quality of a simple photometric data term, and minimise a global spatially regularised energy functional in a novel non-convex optimisation framework. Interleaved with reconstruction, we track the camera's 6DOF motion precisely by frame-rate whole-image alignment against the entire dense model. Our algorithms are highly parallelisable throughout, and DTAM achieves real-time performance using current commodity GPU hardware. We demonstrate that a dense model permits superior tracking performance under rapid motion compared to a state-of-the-art method using features; we also show the additional usefulness of the dense model for real-time scene interaction in a physics-enhanced augmented reality application.
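The "simple photometric data term" improved by averaging over many video frames can be illustrated with a toy cost-volume sketch. This is not the paper's implementation: it substitutes a 1-D horizontal shift for the full projective warp by inverse depth, and the function name `build_cost_volume` and the synthetic scene are illustrative assumptions. It only shows how averaging per-pixel photometric residuals over many overlapping frames, then taking a per-pixel minimum over depth hypotheses, yields a (noisy, unregularised) depth map.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_cost_volume(ref, frames, disparities):
    """Per-pixel photometric cost, averaged over all overlapping frames,
    for each candidate disparity (a 1-D stand-in for the per-pixel
    inverse-depth hypotheses of a real cost volume)."""
    cost = np.empty((len(disparities), *ref.shape))
    for k, d in enumerate(disparities):
        # "Warp" each frame back by the hypothesised disparity and
        # average the absolute photometric residuals across frames.
        residuals = [np.abs(ref - np.roll(f, -d, axis=1)) for f in frames]
        cost[k] = np.mean(residuals, axis=0)
    return cost

# Toy static scene: every frame observes the reference pattern shifted
# by the true disparity 3, plus independent per-frame sensor noise.
ref = rng.random((8, 32))
frames = [np.roll(ref, 3, axis=1) + rng.normal(0.0, 0.01, ref.shape)
          for _ in range(5)]

cost = build_cost_volume(ref, frames, disparities=range(6))
depth = cost.argmin(axis=0)  # winner-takes-all depth map, no regulariser
```

In DTAM this raw data term is not used winner-takes-all as above: it enters the spatially regularised energy functional, whose smoothness term suppresses exactly the per-pixel outliers that the toy `argmin` leaves in.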