Tracking a camera pose in all six degrees of freedom is a task with many applications in 3D imaging, such as augmented reality or robot navigation. Structure from motion is a well-known approach to this task, but it comes with several known restrictions: the scale ambiguity of the computed relative pose, and the need for a certain camera movement (preferably lateral) to initialise the tracking. In recent years, time-of-flight imaging sensors have been developed that measure metric depth over a whole region at a frame rate similar to that of a standard CCD camera. In this work a camera rig consisting of a standard 2D CCD camera and a time-of-flight 3D camera is used. Structure from motion is computed on the 2D images, aided by the depth measurements from the time-of-flight camera, in order to overcome the restrictions named above. It is shown how the additional 3D information can be used to improve the accuracy of the camera pose estimation.
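As a minimal illustration of how metric time-of-flight depth can resolve the scale ambiguity mentioned above, the sketch below (not from the paper; all function names and the median-ratio estimator are assumptions for illustration) compares depths of triangulated structure-from-motion points with the corresponding metric depths measured by the ToF camera, and recovers a single scale factor for the relative translation:

```python
import numpy as np

def resolve_sfm_scale(z_sfm, z_tof):
    """Estimate the metric scale of an SfM reconstruction from
    time-of-flight depth measurements of the same scene points.

    z_sfm -- depths of triangulated points (arbitrary SfM scale)
    z_tof -- metric depths of the same points from the ToF camera

    Returns s such that s * z_sfm approximates z_tof. The median of
    the per-point ratios is used for robustness against outliers
    (e.g. wrong correspondences or invalid depth pixels).
    """
    z_sfm = np.asarray(z_sfm, dtype=float)
    z_tof = np.asarray(z_tof, dtype=float)
    valid = (z_sfm > 0) & (z_tof > 0)  # discard invalid measurements
    return float(np.median(z_tof[valid] / z_sfm[valid]))

# Synthetic example: true metric scale 2.5, with noise and one outlier.
rng = np.random.default_rng(0)
z_sfm = rng.uniform(1.0, 4.0, 50)
z_tof = 2.5 * z_sfm + rng.normal(0.0, 0.01, 50)
z_tof[10] = 40.0  # simulated gross outlier

s = resolve_sfm_scale(z_sfm, z_tof)
# Apply the recovered scale to the (up-to-scale) relative translation.
t_metric = s * np.array([0.1, 0.0, 0.02])
```

The median estimator is one simple robust choice; a least-squares fit over inlier ratios, or folding the depth measurements directly into the pose optimisation as the paper proposes, would refine this further.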