We propose a model-based camera pose estimation approach that uses GPU-assisted analysis-by-synthesis on a camera with a very wide field of view (e.g. a fish-eye lens). After an initial registration, the synthesis step of the tracking runs on graphics hardware, which simulates the camera's internal and external parameters and thereby minimizes lens and perspective differences between a model view and the real camera image. We show how such a model is created automatically from a scene and analyze the sensitivity of the tracking to model accuracy, in particular when free-form surfaces are represented by planar patches. We also evaluate accuracy and show on both synthetic and real data that the system does not accumulate drift. The camera's wide field of view and the subdivision of our reference model into many textured free-form surfaces make the system robust against moving persons and other occlusions in the environment, and provide a camera pose estimate in a fixed, known coordinate system.
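The analysis-by-synthesis loop can be sketched in much simplified, one-dimensional form: synthesize a view of the reference model under a hypothesized pose, compare it photometrically with the observed image, and adjust the pose to reduce the discrepancy. The function names, the single translational pose parameter, and the coarse-to-fine grid search below are illustrative assumptions for this sketch, not the paper's actual GPU implementation or optimizer.

```python
import numpy as np

def synthesize(pose, model):
    """Render a toy one-dimensional "view" of the model shifted by `pose`.
    Stands in for the GPU rendering of the textured reference model
    under a hypothesized camera pose (illustrative only)."""
    x = np.arange(model.size, dtype=float)
    return np.interp(x - pose, x, model, left=0.0, right=0.0)

def refine_pose(observed, model, pose0=0.0, span=10.0, levels=6):
    """Analysis-by-synthesis loop in miniature: coarse-to-fine search
    over the pose parameter, minimizing the photometric (SSD) error
    between the observed image and synthesized model views."""
    pose = pose0
    for _ in range(levels):
        candidates = pose + np.linspace(-span, span, 21)
        errors = [np.sum((observed - synthesize(p, model)) ** 2)
                  for p in candidates]
        pose = candidates[int(np.argmin(errors))]
        span /= 5.0  # narrow the search window around the best candidate
    return pose

# Example: recover a known shift of 3.2 "pixels" from a synthetic observation.
model = np.exp(-0.5 * ((np.arange(64) - 20.0) / 3.0) ** 2)  # a textured blob
observed = synthesize(3.2, model)
estimate = refine_pose(observed, model)
```

In the real system the synthesis step additionally simulates the fish-eye lens distortion, so that the rendered and observed images can be compared directly without undistorting the camera image.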