Estimating face-pose consistency based on synthetic view space

  • Authors:
  • Qigang Gao; A. K. C. Wong; Shang-Hua Wang

  • Affiliations:
  • Fac. of Comput. Sci., Dalhousie Univ., Halifax, NS

  • Venue:
  • IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans
  • Year:
  • 1998


Abstract

The visual appearance of an object in space is an image configuration projected from a subset of connected faces of the object. Face perception and face integration are believed to play a key role in object recognition in human vision. This paper presents a novel approach to calculating viewpoint consistency for three-dimensional (3D) object recognition that utilizes perceptual models of face grouping and face integration. In this approach, faces are used as perceptual entities, in accordance with the visual perception of shape constancy and face-pose consistency. To accommodate perceptual knowledge of the face visibility of objects, a synthetic view space (SVS) is developed. SVS is an abstract perceptual space that partitions and synthesizes the conventional metric view sphere into a synthetic view box, in which only a very limited set of synthetic views (s-views) needs to be considered when estimating face-pose consistency. The s-views are structurally organized in a network, the view-connectivity net (VCN), which describes all possible connections and constraints among the s-views in SVS. VCN provides a mechanism for pruning the search space of SVS during the estimation of face-pose consistency. The method has been successfully used for recognizing a class of industrial parts.
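
The abstract gives no implementation details, so the following Python sketch is only a hypothetical illustration of the VCN idea: each s-view is modeled as the set of object faces visible from one region of the synthetic view box, edges link s-views that share faces, and the search for s-views consistent with an observed face set expands only along those edges (a stand-in for the pruning role the abstract assigns to VCN). All view names, face labels, and the matching rule are assumptions, not taken from the paper.

```python
from itertools import combinations

# Hypothetical s-views: each maps a view name to the set of object faces
# visible from that region of the synthetic view box. Labels are illustrative.
S_VIEWS = {
    "v1": {"top", "front"},
    "v2": {"top", "front", "right"},
    "v3": {"front", "right"},
    "v4": {"right", "back"},
}


def build_vcn(s_views):
    """Link s-views that share at least one visible face, so a search can
    move only between viewpoints that are plausibly adjacent."""
    vcn = {name: set() for name in s_views}
    for a, b in combinations(s_views, 2):
        if s_views[a] & s_views[b]:
            vcn[a].add(b)
            vcn[b].add(a)
    return vcn


def consistent_s_views(observed_faces, s_views, vcn):
    """Return s-views whose visible-face sets cover the observed faces,
    expanding only through VCN neighbours of matching views (a stand-in
    for the search-space pruning the abstract attributes to VCN)."""
    seeds = [v for v, faces in s_views.items() if observed_faces <= faces]
    matches, visited = [], set()
    frontier = seeds[:1]  # start from a single candidate view
    while frontier:
        v = frontier.pop()
        if v in visited:
            continue
        visited.add(v)
        if observed_faces <= s_views[v]:
            matches.append(v)
            frontier.extend(vcn[v] - visited)  # explore only connected views
    return matches


vcn = build_vcn(S_VIEWS)
print(consistent_s_views({"top", "front"}, S_VIEWS, vcn))  # e.g. ['v1', 'v2']
```

The point of the sketch is only that candidate viewpoints are reached through the connectivity structure rather than by scanning an entire metric view sphere; the paper's actual partitioning of the view box and its consistency measure are not reproduced here.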