We present a tracking method in which the full camera position and orientation are tracked from intensity differences in a video sequence. The camera pose is computed from 3D planes and hence does not depend on point correspondences. The plane-based formulation also allows additional constraints to be added naturally, e.g., perpendicularity between wall, floor, and ceiling surfaces, or co-planarity of wall surfaces. A particular feature of our method is that the full 3D pose change is computed directly from temporal image differences, without committing to a particular intermediate (e.g., 2D feature) representation. We experimentally compared our method with regular 2D SSD tracking and found it more robust and stable, because 3D consistency is enforced even in the low-level registration of image regions. This yields better results than first computing (and hence committing to) 2D image features and then computing the 3D pose from them.
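To make the underlying SSD principle concrete, the following is a minimal, illustrative sketch (not the paper's full plane-based 3D formulation): a Gauss-Newton registration that estimates a 2D translation by minimizing the sum-of-squared intensity differences between a template and a warped image, in the style of Lucas-Kanade. All function and variable names here are our own illustrative choices; the paper's actual method parameterizes the warp by the full 3D camera pose over planar regions rather than by a 2D translation.

```python
import numpy as np

def ssd_track_translation(template, image, p0=(0.0, 0.0), iters=50, tol=1e-4):
    """Estimate a translation p = (dx, dy) minimizing
    sum_x (I(x + p) - T(x))^2 by Gauss-Newton iteration.
    Illustrative 2-DoF stand-in for pose-parameterized SSD tracking."""
    h, w = template.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    p = np.array(p0, dtype=float)
    for _ in range(iters):
        # Warp: sample I at translated coordinates with bilinear interpolation.
        x = xs + p[0]
        y = ys + p[1]
        x0 = np.clip(np.floor(x).astype(int), 0, image.shape[1] - 2)
        y0 = np.clip(np.floor(y).astype(int), 0, image.shape[0] - 2)
        ax, ay = x - x0, y - y0
        I = ((1 - ax) * (1 - ay) * image[y0, x0]
             + ax * (1 - ay) * image[y0, x0 + 1]
             + (1 - ax) * ay * image[y0 + 1, x0]
             + ax * ay * image[y0 + 1, x0 + 1])
        # Temporal image difference (the SSD residual) and image gradients.
        r = (I - template).ravel()
        Iy, Ix = np.gradient(I)          # gradients of the warped image
        J = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
        # Gauss-Newton step: solve J dp = -r in the least-squares sense.
        dp = np.linalg.lstsq(J, -r, rcond=None)[0]
        p += dp
        if np.linalg.norm(dp) < tol:
            break
    return p

# Usage: extract a template at a known offset and recover that offset.
big = np.fromfunction(lambda y, x: np.sin(0.3 * x) + np.cos(0.2 * y), (64, 64))
template = big[10:40, 10:40]             # true offset is (10, 10)
p_est = ssd_track_translation(template, big, p0=(8.5, 9.5))
```

In the paper's setting, the two translation parameters above are replaced by the six pose parameters of the camera, and the warp becomes the homography induced by each tracked 3D plane, so that a single pose update is estimated jointly from all planar regions.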