Discriminative Face Alignment

  • Authors: Xiaoming Liu
  • Affiliations: GE Global Research, Schenectady
  • Venue: IEEE Transactions on Pattern Analysis and Machine Intelligence
  • Year: 2009

Abstract

This paper proposes a discriminative framework for efficiently aligning images. Although conventional Active Appearance Model (AAM)-based approaches have achieved some success, they suffer from a generalization problem: how to align an arbitrary image with a generic model. We treat the iterative image alignment problem as a process of maximizing the score of a trained two-class classifier that distinguishes correct alignment (positive class) from incorrect alignment (negative class). During the modeling stage, given a set of images with ground truth landmarks, we train a conventional Point Distribution Model (PDM) and a boosting-based classifier, which acts as the appearance model. When tested on an image with initial landmark locations, the proposed algorithm iteratively updates the shape parameters of the PDM via gradient ascent so that the classification score of the warped image is maximized. We use the term Boosted Appearance Models (BAMs) to refer to the learned shape and appearance models, as well as our specific alignment method. The proposed framework is applied to the face alignment problem. Through extensive experiments, we show that, compared to the AAM-based approach, this framework improves the robustness, accuracy, and efficiency of face alignment by a large margin, especially on unseen data.
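The abstract describes alignment as gradient ascent on a classifier score over PDM shape parameters. Below is a minimal sketch of that loop, assuming hypothetical helpers for the piecewise-affine warp (`pdm.warp_to_mean_shape`), the boosted classifier score, and its gradient with respect to the shape parameters; it illustrates the iteration structure only and is not the authors' implementation.

```python
import numpy as np

def bam_align(image, p_init, pdm, classifier, step=0.1, n_iters=50, tol=1e-6):
    """Sketch of BAM-style alignment: gradient ascent on the classification
    score F(p) of the image warped to the mean shape by PDM parameters p.

    `pdm` and `classifier` are assumed objects (hypothetical interfaces):
      pdm.warp_to_mean_shape(image, p) -> warped image patch I(W(x; p))
      classifier.score(warped)         -> scalar score F(p)
      classifier.score_gradient(...)   -> dF/dp via the chain rule through the warp
    """
    p = np.asarray(p_init, dtype=float).copy()
    prev_score = -np.inf
    for _ in range(n_iters):
        warped = pdm.warp_to_mean_shape(image, p)          # I(W(x; p))
        score = classifier.score(warped)                   # higher score = better alignment
        grad = classifier.score_gradient(warped, image, pdm, p)
        p = p + step * grad                                # ascent step on shape parameters
        if abs(score - prev_score) < tol:                  # stop when the score plateaus
            break
        prev_score = score
    return p
```

A usage call would look like `p_final = bam_align(img, p0, pdm, boosted_classifier)`, where `p0` comes from a face detector's initial landmark estimate; in practice the step size and stopping criterion would be tuned, and the gradient computation would reuse precomputed warp Jacobians.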