Multi-view Occlusion Reasoning for Probabilistic Silhouette-Based Dynamic Scene Reconstruction

  • Authors:
  • Li Guan; Jean-Sébastien Franco; Marc Pollefeys

  • Affiliations:
  • UNC-Chapel Hill, Chapel Hill, USA; LaBRI--INRIA Sud-Ouest, University of Bordeaux, Talence Cedex, France; UNC-Chapel Hill, Chapel Hill, USA and ETH-Zürich, Zürich, Switzerland

  • Venue:
  • International Journal of Computer Vision
  • Year:
  • 2010


Abstract

In this paper, we present an algorithm to probabilistically estimate object shapes in a dynamic 3D scene using silhouette information derived from multiple geometrically calibrated video camcorders. The scene is represented by a 3D volume, and every object in the scene is associated with a distinctive label that represents its existence at each voxel location. The label links together automatically learned, view-specific appearance models of the respective object, thereby avoiding photometric calibration of the cameras. Generative probabilistic sensor models are derived by analyzing the dependencies between the sensor observations and the object labels. Bayesian reasoning is then applied to achieve reconstruction that is robust to real-world challenges such as lighting variations and changing backgrounds. Our main contribution is to explicitly model the visual occlusion process, and we show that: (1) static objects (such as trees or lamp posts), as parts of the pre-learned background model, can be automatically recovered as a byproduct of the inference; and (2) ambiguities due to inter-occlusion between multiple dynamic objects can be alleviated, drastically improving the final reconstruction quality. Several indoor and outdoor real-world datasets are evaluated to verify our framework.
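The core of the per-voxel Bayesian reasoning described above can be illustrated with a minimal sketch: fusing independent silhouette observations from several calibrated views into a posterior occupancy probability for a single voxel. This is a simplified toy model, not the paper's full framework; it omits the explicit occlusion modeling and per-object appearance labels, and the detection/false-alarm rates below are illustrative values, not parameters from the paper.

```python
import numpy as np

def fuse_voxel_occupancy(silhouette_probs, p_det=0.9, p_fa=0.1, prior=0.5):
    """Fuse per-view silhouette evidence for one voxel via Bayes' rule.

    silhouette_probs : probability, for each camera, that the voxel's
        projection lands on a foreground (silhouette) pixel.
    p_det / p_fa : hypothetical detection / false-alarm rates of a
        simple binary silhouette sensor model (illustrative values).
    prior : prior probability that the voxel is occupied.
    """
    # Per-view observation likelihood under "occupied" vs. "empty",
    # marginalizing over the binary silhouette measurement.
    like_occ = np.prod([p * p_det + (1 - p) * (1 - p_det)
                        for p in silhouette_probs])
    like_emp = np.prod([p * p_fa + (1 - p) * (1 - p_fa)
                        for p in silhouette_probs])
    # Posterior occupancy by Bayes' rule, assuming view independence.
    return prior * like_occ / (prior * like_occ + (1 - prior) * like_emp)

# Consistent foreground evidence across views -> high occupancy posterior;
# consistent background evidence -> low posterior.
print(fuse_voxel_occupancy([0.95, 0.90, 0.85]))
print(fuse_voxel_occupancy([0.05, 0.10, 0.10]))
```

A soft fusion of this kind degrades gracefully when one view's silhouette is unreliable, which is why probabilistic formulations are preferred over hard visual-hull intersection in uncontrolled environments.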