Statistical approach to shape from shading: Reconstruction of three-dimensional face surfaces from single two-dimensional images

  • Authors:
  • Joseph J. Atick; Paul A. Griffin; A. Norman Redlich

  • Affiliations:
  • Computational Neuroscience Laboratory, The Rockefeller University, 1230 York Avenue, New York, NY 10021-6399 USA (all authors)

  • Venue:
  • Neural Computation
  • Year:
  • 1996


Abstract

The human visual system is proficient at perceiving three-dimensional shape from the shading patterns in a two-dimensional image. How it does this is not well understood and continues to be a question of fundamental and practical interest. In this paper we present a new quantitative approach to shape-from-shading that may provide some answers. We suggest that the brain, through evolution or prior experience, has discovered that objects can be classified, according to their shape, into lower-dimensional object classes. Extraction of shape from shading then reduces to the much simpler problem of parameter estimation in a low-dimensional space. We carry out this proposal for an important class of three-dimensional (3D) objects: human heads. From an ensemble of several hundred laser-scanned 3D heads, we use principal component analysis to derive a low-dimensional parameterization of head-shape space. An algorithm for solving shape-from-shading in this representation is presented. It works well even on real images, where it recovers the 3D surface of a given person, maintaining facial detail and identity, from a single 2D image of that person's face. This algorithm has applications in face recognition and animation.
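The core idea of the abstract, parameterizing a class of surfaces with PCA and then solving shape-from-shading as low-dimensional parameter estimation, can be sketched in a few lines. The following is a minimal illustration only, not the authors' algorithm: it fabricates a small ensemble of smooth random "surfaces" in place of the laser-scanned heads, assumes a simple Lambertian shading model, and recovers the PCA coefficients of a rendered surface by a crude coordinate-descent fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical ensemble standing in for the laser-scanned heads. ---
# Each "surface" is a depth map z(x, y), flattened to a row vector.
n_surf, h, w = 200, 16, 16
xs, ys = np.meshgrid(np.linspace(-1, 1, w), np.linspace(-1, 1, h))
bumps = np.stack([np.exp(-((xs - a) ** 2 + (ys - b) ** 2))
                  for a in (-0.5, 0, 0.5) for b in (-0.5, 0, 0.5)])
ensemble = rng.normal(size=(n_surf, 9)) @ bumps.reshape(9, -1)

# --- PCA: a low-dimensional parameterization of surface space. ---
mean = ensemble.mean(axis=0)
U, S, Vt = np.linalg.svd(ensemble - mean, full_matrices=False)
k = 5                     # keep a handful of principal components
components = Vt[:k]       # each row is one mode of shape variation

def surface(params):
    """Reconstruct a depth map from k PCA coefficients."""
    return (mean + params @ components).reshape(h, w)

def lambertian(z, light=np.array([0.3, 0.2, 1.0])):
    """Render a depth map under an assumed Lambertian shading model."""
    gy, gx = np.gradient(z)
    normals = np.dstack([-gx, -gy, np.ones_like(z)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return np.clip(normals @ (light / np.linalg.norm(light)), 0, None)

# --- Shape from shading as parameter estimation: render a "true"
# surface, then search the k-dimensional coefficient space for the
# parameters whose rendering best matches the observed image.
true_params = rng.normal(size=k)
image = lambertian(surface(true_params))

def sse(params):
    return np.sum((lambertian(surface(params)) - image) ** 2)

est = np.zeros(k)          # start from the mean surface
for _ in range(200):       # greedy coordinate descent (illustrative only)
    for i in range(k):
        for step in (0.05, -0.05):
            trial = est.copy()
            trial[i] += step
            if sse(trial) < sse(est):
                est = trial
```

The point of the sketch is the change of problem: instead of estimating a depth value per pixel (256 unknowns here), the fit searches only the `k = 5` PCA coefficients, which is what makes the inverse problem tractable; the ensemble, shading model, and optimizer are all placeholder assumptions.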