We propose a method that combines geometric and real-aperture techniques for monocular 3D reconstruction of static scenes at absolute scale. Our algorithm relies on a sequence of images of the object, acquired from different viewpoints by a monocular camera with a fixed focal setting. Object features are tracked over a range of distances from a camera with a small depth of field, leading to a varying degree of defocus for each feature. Information on absolute depth is obtained with a Depth-from-Defocus approach: the parameters of the point spread functions estimated by Depth-from-Defocus serve as a regularisation term for Structure-from-Motion. The reprojection error obtained from Bundle Adjustment and the absolute depth error obtained from Depth-from-Defocus are minimised simultaneously for all tracked object features. The proposed method yields absolutely scaled 3D coordinates of the scene points without any prior knowledge about the structure of the scene. Evaluating the algorithm on real-world data, we demonstrate that it achieves typical relative errors between 2 and 3 percent. Possible applications of our approach include self-localisation and mapping for mobile robotic systems and pose estimation in industrial machine vision.
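The joint objective described above — a Bundle Adjustment reprojection error regularised by an absolute-depth term from Depth-from-Defocus — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the simple thin-lens blur model linking PSF radius to depth, and the weighting parameter `lam` are all assumptions introduced here.

```python
import numpy as np

def dfd_depth(sigma, kappa, z_focus):
    # Assumed thin-lens defocus model: the PSF radius sigma grows in
    # proportion to the deviation of 1/z from 1/z_focus, i.e.
    # sigma = kappa * (1/z_focus - 1/z). Inverting it yields an absolute
    # depth estimate. Assumes the feature lies beyond the plane of best
    # focus (sigma / kappa < 1 / z_focus).
    return 1.0 / (1.0 / z_focus - sigma / kappa)

def combined_error(points_3d, projections, observations, sigmas,
                   kappa, z_focus, lam=1.0):
    # Reprojection error from Bundle Adjustment: squared pixel residuals
    # between predicted projections and tracked feature observations.
    e_reproj = np.sum((projections - observations) ** 2)
    # Absolute depth error: camera-frame depth of each reconstructed
    # point versus its Depth-from-Defocus estimate. This term fixes the
    # overall scale, which Structure-from-Motion alone cannot recover.
    z_dfd = dfd_depth(sigmas, kappa, z_focus)
    e_depth = np.sum((points_3d[:, 2] - z_dfd) ** 2)
    # lam (assumed here) balances the two terms; both are minimised
    # simultaneously over all tracked features.
    return e_reproj + lam * e_depth
```

In practice this combined error would be passed to a non-linear least-squares solver that optimises camera poses and 3D points jointly, with the depth term acting as the regulariser that anchors the reconstruction to absolute scale.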