Silhouette and stereo fusion for 3D object modeling

  • Authors:
  • Carlos Hernández Esteban; Francis Schmitt

  • Affiliations:
  • Signal and Image Processing Department, CNRS UMR 5141, Ecole Nationale Supérieure des Télécommunications, France (both authors)

  • Venue:
  • Computer Vision and Image Understanding - Model-based and image-based 3D scene representation for interactive visualization
  • Year:
  • 2004

Abstract

In this paper, we present a new approach to high-quality 3D object reconstruction. Starting from a calibrated sequence of color images, the algorithm reconstructs both the 3D geometry and the texture. The core of the method is a deformable model, which provides the framework in which texture and silhouette information are fused. This is achieved by defining two image-based external forces: a texture-driven force and a silhouette-driven force. The texture force is computed in two steps: a multi-stereo correlation voting approach and a gradient vector flow diffusion. Because the voting approach operates at high resolution, a multi-grid version of the gradient vector flow has been developed. Concerning the silhouette force, a new formulation of the silhouette constraint is derived; it provides a robust way to integrate the silhouettes into the evolution algorithm and, as a consequence, allows the contour generators of the model to be recovered at the end of the iteration process. Finally, a texture map is computed from the original images for the reconstructed 3D model.
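
To make the texture-force pipeline more concrete, the sketch below shows a standard gradient vector flow (GVF) diffusion, in Python/NumPy, applied to a 3D scalar volume such as the correlation score produced by a multi-stereo voting step. It is a minimal illustration of the generic technique named in the abstract, not the authors' implementation: the function name, parameter values (mu, dt, n_iter), the periodic boundary handling, and the plain explicit update scheme are assumptions chosen for brevity, and the multi-grid acceleration described in the paper is omitted.

    import numpy as np

    def gradient_vector_flow_3d(f, mu=0.1, dt=0.125, n_iter=100):
        """Diffuse the gradient of a 3D scalar field f (e.g. a stereo-correlation
        volume) into a smooth vector field (vx, vy, vz) via classic GVF iterations.
        Hypothetical sketch; parameters are illustrative, not from the paper."""
        # Gradient of the input volume: the data term keeps v close to it
        # wherever the correlation response is strong.
        fx, fy, fz = np.gradient(f)
        sq_mag = fx**2 + fy**2 + fz**2

        vx, vy, vz = fx.copy(), fy.copy(), fz.copy()
        for _ in range(n_iter):
            for v, g in ((vx, fx), (vy, fy), (vz, fz)):
                # 6-neighbour Laplacian with periodic boundaries (assumption).
                lap = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                       np.roll(v, 1, 1) + np.roll(v, -1, 1) +
                       np.roll(v, 1, 2) + np.roll(v, -1, 2) - 6.0 * v)
                # mu * Laplacian smooths and extends the field into regions
                # with weak correlation; the second term anchors it to grad(f).
                v += dt * (mu * lap - sq_mag * (v - g))
        return vx, vy, vz

    # Example usage on a small synthetic correlation volume.
    vol = np.zeros((32, 32, 32))
    vol[16, 16, 16] = 1.0
    vx, vy, vz = gradient_vector_flow_3d(vol, n_iter=50)

In the evolution scheme summarized by the abstract, such a diffused field would supply the texture-driven external force sampled at each surface vertex, to be combined with the silhouette-driven force (and, as is usual for deformable models, an internal regularization force) to drive the mesh toward the object surface.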