3D shape constraint for facial feature localization using probabilistic-like output

  • Authors:
  • Longbin Chen; Lei Zhang; Hongjiang Zhang; Mohamed Abdel-Mottaleb

  • Affiliations:
  • ECE Dept., University of Miami, Coral Gables, FL; Microsoft Research Asia, Beijing, China; Microsoft Research Asia, Beijing, China; ECE Dept., University of Miami, Coral Gables, FL

  • Venue:
  • FGR '04: Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition
  • Year:
  • 2004

Abstract

This paper presents a method to automatically locate facial feature points under large variations in pose, illumination, and facial expression. First, we propose a method to compute a probabilistic-like output for each pixel of the image; this output describes the likelihood that the pixel is the center of a specified object. A Gaussian Mixture Model is used to approximate the distribution of the probabilistic-like output. The centers of these Gaussians are assigned a probabilistic-like measure and are treated as candidate feature points; each facial region may contain one or more candidates. A 3D model of the facial feature points is then built to enforce geometric constraints on the localization results. Compared with the Active Shape Model (ASM) and its variants, our method accommodates larger variations in pose, lighting, and facial expression. Moreover, it is less sensitive to initialization errors, accurate, and fast: on a computer with a P4 CPU it takes about 10 ms to locate the five feature points (two eye centers, two mouth corners, and the nose tip). The localization accuracy is comparable to that of manually labeled features, and the method is robust to distractors such as glasses and beards. Experiments on the FERET gallery and the PIE database are also reported.
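
For illustration, the sketch below shows one way the candidate-extraction step described in the abstract could look: fit a Gaussian Mixture Model to the locations of high-scoring pixels and use the component means as candidate feature points, each with a probabilistic-like measure. This is a minimal sketch under stated assumptions, not the authors' implementation; the score_map input, the threshold, the number of components, and the use of scikit-learn's GaussianMixture (rather than the paper's own fit over the probabilistic-like output) are all assumptions.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def candidate_feature_points(score_map, n_candidates=3, threshold=0.5):
        """Fit a GMM to high-scoring pixel locations; the component means
        become candidate feature points, each assigned a probabilistic-like
        measure derived from the scores of the pixels it explains.

        score_map    -- 2-D array of per-pixel probabilistic-like scores
                        (hypothetical input; stands in for the paper's
                        per-pixel classifier output)
        n_candidates -- number of mixture components kept per facial region
                        (assumed parameter)
        threshold    -- pixels scoring above this value are used for the fit
                        (assumed simplification of weighting every pixel)
        """
        ys, xs = np.nonzero(score_map > threshold)
        coords = np.column_stack([xs, ys]).astype(float)
        if len(coords) < n_candidates:        # not enough strong responses
            return np.empty((0, 2)), np.empty(0)

        gmm = GaussianMixture(n_components=n_candidates, covariance_type="full")
        gmm.fit(coords)

        # Probabilistic-like measure for each Gaussian center: the mean pixel
        # score, weighted by each pixel's responsibility for that component.
        resp = gmm.predict_proba(coords)               # shape (n_pixels, k)
        pixel_scores = score_map[ys, xs]
        measures = (resp * pixel_scores[:, None]).sum(0) / resp.sum(0)

        return gmm.means_, measures                    # (x, y) centers and scores

A caller would run this once per facial region (eyes, mouth corners, nose tip) and pass the resulting candidates, together with their measures, to the 3D shape constraint that selects the most geometrically consistent combination.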