Background estimation from non-time sequence images

  • Authors:
  • Miguel Granados; Hans-Peter Seidel; Hendrik P. A. Lensch

  • Affiliations:
  • MPI Informatik; MPI Informatik; MPI Informatik

  • Venue:
  • GI '08: Proceedings of Graphics Interface 2008
  • Year:
  • 2008

Abstract

We address the problem of reconstructing the background of a scene from a set of photographs featuring several occluding objects. We assume that the photographs are taken from the same viewpoint and under similar illumination conditions. Our approach is to define the background as a composite of the input photographs. Each possible composite is assigned a cost, and the resulting cost function is minimized. We penalize deviations from the following two model assumptions: background objects are stationary, and background objects are more likely to appear repeatedly across the photographs. We approximate object stationarity using a motion boundary consistency term, and object likelihood using probability density estimates. The penalties are combined using an entropy-based weighting function. Furthermore, we constrain the solution space in order to avoid composites that cut through objects. The cost function is minimized using graph cuts, and the final result is composed using gradient domain fusion. We demonstrate our method on recovering clean, unoccluded shots of crowded public places, as well as on removing ghosting artifacts in the reconstruction of high dynamic range images from multi-exposure sequences. Our contribution is an automatic method for consistent background estimation from multiple exposures featuring occluders, and its application to the problem of ghost removal in high dynamic range image reconstruction.
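The data term sketched in the abstract can be illustrated with a minimal example: the code below assigns each pixel of each photograph a cost from a Gaussian kernel density estimate over the aligned stack (colours that recur across the photographs are treated as likely background) and then picks the lowest-cost source image per pixel. The function names (`density_cost`, `naive_composite`), the bandwidth value, and the winner-take-all selection are assumptions made for illustration only; the paper's actual method additionally uses a motion boundary consistency term, an entropy-based weighting of the penalties, constraints that keep seams from cutting through objects, graph-cut minimization of a cost with pairwise terms, and gradient domain fusion, none of which are reproduced here.

```python
# Hypothetical sketch of the composite-labeling idea, not the authors' implementation.
import numpy as np

def density_cost(stack, bandwidth=0.1):
    """Per-pixel, per-image cost: negative log of a kernel density estimate.

    stack: (N, H, W, 3) float array of N aligned photographs in [0, 1].
    Colours that are frequent across the stack (likely background) get a
    low cost; rare colours (likely occluders) get a high cost.
    """
    n = stack.shape[0]
    # Pairwise squared colour distances between the N observations at each pixel.
    diff = stack[:, None] - stack[None, :]           # (N, N, H, W, 3)
    d2 = np.sum(diff ** 2, axis=-1)                  # (N, N, H, W)
    # Gaussian kernel density estimate of each observation given the stack.
    kernel = np.exp(-d2 / (2.0 * bandwidth ** 2))
    density = kernel.sum(axis=1) / n                 # (N, H, W)
    return -np.log(density + 1e-12)

def naive_composite(stack):
    """Winner-take-all composite: per pixel, copy from the lowest-cost image.

    This ignores the pairwise smoothness term minimized with graph cuts in
    the paper, so seams may cut through objects; it only illustrates the
    likelihood-based data term.
    """
    cost = density_cost(stack)                       # (N, H, W)
    labels = np.argmin(cost, axis=0)                 # (H, W) chosen source index
    h, w = labels.shape
    return stack[labels, np.arange(h)[:, None], np.arange(w)[None, :]]

if __name__ == "__main__":
    # Toy example: 5 synthetic "photographs" of the same 32x32 scene.
    rng = np.random.default_rng(0)
    background = rng.random((32, 32, 3))
    stack = np.repeat(background[None], 5, axis=0)
    # Simulate occluders: overwrite a random patch in each photograph.
    for img in stack:
        y, x = rng.integers(0, 24, size=2)
        img[y:y + 8, x:x + 8] = rng.random(3)
    recovered = naive_composite(stack)
    print("mean abs error vs. true background:", np.abs(recovered - background).mean())
```

Because the toy occluders rarely overlap in the same place across all five photographs, the density-based cost alone recovers most of the background; in real sequences with persistent or large occluders, the stationariness term, the entropy-based weighting, and the graph-cut optimization described in the abstract are what keep the composite consistent.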