Robust variational reconstruction from multiple views

  • Authors:
  • Natalia Slesareva, Thomas Bühler, Kai Uwe Hagenburg, Joachim Weickert, Andrés Bruhn, Zachi Karni, Hans-Peter Seidel

  • Affiliations:
  • Mathematical Image Analysis Group, Dept. of Mathematics and Computer Science, Saarland University, Saarbrücken, Germany (N. Slesareva, T. Bühler, K. U. Hagenburg, J. Weickert, A. Bruhn); Max-Planck-Institut für Informatik, Saarbrücken, Germany (Z. Karni, H.-P. Seidel)

  • Venue:
  • SCIA'07: Proceedings of the 15th Scandinavian Conference on Image Analysis
  • Year:
  • 2007

Abstract

Recovering a 3-D scene from multiple 2-D views is indispensable for many computer vision applications, ranging from free-viewpoint video to face recognition. Ideally, the recovered depth map should be dense and piecewise smooth with a fine level of detail, and the recovery procedure should be robust with respect to outliers and global illumination changes. We present a novel variational approach that satisfies these needs. Our model incorporates robust penalisation in the data term and anisotropic regularisation in the smoothness term. In order to render the data term robust with respect to global illumination changes, a gradient constancy assumption is applied to logarithmically transformed input data. Focussing on translational camera motion and considering small baseline distances between the different camera positions, we reconstruct a common disparity map that allows image points to be tracked throughout the entire sequence. Experiments on synthetic image data demonstrate the favourable performance of our novel method.
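To make the model concrete, the display below sketches one plausible shape of such an energy; the notation is ours, not the paper's, and the robust penaliser Ψ, the diffusion tensor D, the smoothness weight α, and the per-view baseline vectors b_i are illustrative assumptions. Writing f_i for the logarithmically transformed i-th view, f_0 for the reference view, and u for the common disparity map, a gradient-constancy data term combined with anisotropic regularisation could read

    E(u) \;=\; \int_{\Omega} \sum_{i=1}^{n} \Psi\!\left( \bigl| \nabla f_i\bigl(\mathbf{x} + u(\mathbf{x})\,\mathbf{b}_i\bigr) - \nabla f_0(\mathbf{x}) \bigr|^2 \right) \mathrm{d}\mathbf{x} \;+\; \alpha \int_{\Omega} \nabla u^{\top}\, D\bigl(\nabla f_0\bigr)\, \nabla u \;\mathrm{d}\mathbf{x},

where, for instance, \Psi(s^2) = \sqrt{s^2 + \varepsilon^2} is a robust penaliser that downweights outliers, and D(\nabla f_0) is an anisotropic diffusion tensor that encourages smoothing along, but not across, image structures. The gradient constancy assumption on the log-transformed views is what yields the claimed illumination robustness: a multiplicative global illumination change becomes an additive offset under the logarithm and vanishes under differentiation.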