3D and Appearance Modeling from Images

  • Authors and Affiliations:
  • Peter Sturm (INRIA and Laboratoire Jean Kuntzmann, Grenoble, France); Amaël Delaunoy (INRIA and Laboratoire Jean Kuntzmann, Grenoble, France); Pau Gargallo (Barcelona Media, Barcelona, Spain); Emmanuel Prados (INRIA and Laboratoire Jean Kuntzmann, Grenoble, France); Kuk-Jin Yoon (GIST, Gwangju, South Korea)

  • Venue:
  • CIARP '09 Proceedings of the 14th Iberoamerican Conference on Pattern Recognition: Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications
  • Year:
  • 2009

Abstract

This paper gives an overview of work done in our group on 3D and appearance modeling of objects from images. The backbone of our approach is what we consider to be the principled optimization criterion for this problem: maximizing the photoconsistency between the input images and images rendered from the estimated surface geometry and appearance. In initial work, we derived a general solution to this problem, showing how to write the gradient of this cost function (a non-trivial undertaking). In subsequent work, we applied this solution to various scenarios: recovery of textured or uniform, Lambertian or non-Lambertian surfaces, under static or varying illumination and with static or varying viewpoint. Our approach handles these different cases because it naturally merges cues that are often treated separately: stereo information, shading, and silhouettes. This merge follows directly from the cost function used: when the estimated geometry and appearance are rendered (under known lighting conditions), the resulting images automatically contain these cues, so comparing them with the input images implicitly exploits all of the cues simultaneously.
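As a minimal sketch of the optimization criterion the abstract describes (not the authors' implementation), the photoconsistency cost can be written as a sum of squared pixel differences between each input image and the corresponding image rendered from the current geometry and appearance estimate. The renderer itself is assumed to exist elsewhere; here the rendered images are simply passed in as arrays.

```python
import numpy as np

def photoconsistency_cost(input_images, rendered_images):
    """Sum of squared pixel differences between each observed input
    image and the image rendered from the estimated surface geometry
    and appearance. Minimizing this cost is equivalent to maximizing
    photoconsistency across all views."""
    cost = 0.0
    for observed, rendered in zip(input_images, rendered_images):
        diff = observed.astype(float) - rendered.astype(float)
        cost += np.sum(diff ** 2)
    return cost

# Toy example with two 2x2 "views": a perfect estimate gives zero cost.
obs = [np.ones((2, 2)), np.zeros((2, 2))]
ren = [np.ones((2, 2)), np.zeros((2, 2))]
print(photoconsistency_cost(obs, ren))  # 0.0
```

In the paper's setting the rendered images are a function of the estimated surface, so the gradient of this cost with respect to the surface (the non-trivial derivation mentioned above) is what drives the optimization.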