Recovering a 3-D scene from multiple 2-D views is indispensable for many computer vision applications, ranging from free-viewpoint video to face recognition. Ideally, the recovered depth map should be dense and piecewise smooth with a fine level of detail, and the recovery procedure should be robust with respect to outliers and global illumination changes. We present a novel variational approach that satisfies these needs. Our model incorporates robust penalisation in the data term and anisotropic regularisation in the smoothness term. To render the data term robust with respect to global illumination changes, a gradient constancy assumption is applied to logarithmically transformed input data. Focussing on translational camera motion and considering small baseline distances between the different camera positions, we reconstruct a common disparity map that allows us to track image points throughout the entire sequence. Experiments on synthetic image data demonstrate the favourable performance of our novel method.
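To make the structure of such a model concrete, the following LaTeX sketch writes down one plausible energy functional of this kind; the robust penaliser \Psi, the smoothness weight \alpha, the diffusion tensor D, and the unit baseline direction \mathbf{e} are illustrative assumptions, not necessarily the exact formulation used in the paper.

% Illustrative energy for the disparity map u : \Omega \to \mathbb{R} (assumed form).
% f_0 = \log g_0 and f_1 = \log g_1 are the logarithmically transformed input images;
% the gradient constancy term compares image gradients after shifting by u(\mathbf{x})
% along the baseline direction \mathbf{e}, and \Psi penalises deviations robustly.
E(u) = \int_{\Omega} \Psi\!\Bigl( \bigl| \nabla f_1\bigl(\mathbf{x} + u(\mathbf{x})\,\mathbf{e}\bigr) - \nabla f_0(\mathbf{x}) \bigr|^2 \Bigr)\,\mathrm{d}\mathbf{x} \;+\; \alpha \int_{\Omega} \nabla u^{\top} D\bigl(\nabla f_0\bigr)\,\nabla u \;\mathrm{d}\mathbf{x},
\qquad \Psi(s^2) = \sqrt{s^2 + \varepsilon^2}.

The logarithmic transformation is what yields the illumination robustness in this sketch: a global multiplicative illumination change g \mapsto c\,g only adds the constant \log c to f = \log g, which the gradient \nabla f removes, so the data term is unaffected. The diffusion tensor D(\nabla f_0) reduces smoothing across image edges while permitting it along them, which is the role of the anisotropic regularisation in the smoothness term.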