Locality-Constrained Active Appearance Model

  • Authors:
  • Xiaowei Zhao; Shiguang Shan; Xiujuan Chai; Xilin Chen

  • Affiliations:
  • Xiaowei Zhao: Key Lab. of Intelligent Information Processing, Chinese Academy of Sciences (CAS), China; Institute of Computing Technology, CAS, Beijing, China; University of Chinese Academy of Sciences, Beijing, ...
  • Shiguang Shan, Xiujuan Chai, Xilin Chen: Key Lab. of Intelligent Information Processing, Chinese Academy of Sciences (CAS), China; Institute of Computing Technology, CAS, Beijing, China

  • Venue:
  • ACCV'12: Proceedings of the 11th Asian Conference on Computer Vision - Volume Part I
  • Year:
  • 2012

Abstract

Although the conventional Active Appearance Model (AAM) has achieved some success in face alignment, it still suffers from poor generalization when applied to unseen subjects and images. In this paper, a novel Locality-Constrained AAM (LC-AAM) algorithm is proposed to tackle this generalization problem. Theoretically, the proposed LC-AAM is a fast approximation to a sparsity-regularized AAM problem, in which sparse representation is exploited for non-linear face modeling. Specifically, for an input image, its K-nearest neighbors are selected as the shape and appearance bases, which are adaptively fitted to the input image by solving a constrained AAM-like fitting problem. Essentially, the effectiveness of the LC-AAM algorithm comes from learning a strong localized shape and appearance prior for the input facial image by exploiting its K most similar patterns. To validate the effectiveness of the algorithm, comprehensive experiments are conducted on two publicly available face databases. Experimental results demonstrate that the method substantially outperforms the original AAM and its variants. In addition, the method outperforms state-of-the-art face alignment methods and generalizes well to unseen subjects and images.
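
For illustration only, the sketch below mimics the locality-constrained idea described in the abstract: select the K training exemplars nearest to an initial estimate and fit the input as a combination of that localized basis. The function names, the synthetic landmark data, and the plain least-squares projection are assumptions made for this sketch; the paper's actual method solves a constrained AAM-like fitting problem over both shape and appearance.

```python
# Hedged sketch of a locality-constrained fit, not the authors' implementation.
import numpy as np

def k_nearest_bases(query, training_set, k):
    """Select the K training exemplars closest to the query (Euclidean distance)."""
    dists = np.linalg.norm(training_set - query, axis=1)
    idx = np.argsort(dists)[:k]
    return training_set[idx]  # (k, d) local basis exemplars

def fit_local_model(observation, bases):
    """Project the observation onto the span of the local exemplar basis.

    A plain least-squares projection stands in here for the constrained
    AAM-like fitting described in the paper.
    """
    mean = bases.mean(axis=0)
    B = (bases - mean).T  # d x k matrix of basis directions
    coeffs, *_ = np.linalg.lstsq(B, observation - mean, rcond=None)
    return mean + B @ coeffs  # reconstruction from the localized model

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train_shapes = rng.normal(size=(200, 136))  # e.g. 68 landmarks (x, y), flattened
    query_shape = rng.normal(size=136)          # initial shape estimate for a new face
    local_bases = k_nearest_bases(query_shape, train_shapes, k=15)
    fitted = fit_local_model(query_shape, local_bases)
    print("reconstruction error:", np.linalg.norm(fitted - query_shape))
```

In this toy setup the K-nearest-neighbor selection provides the "localized prior": only exemplars similar to the input contribute to the basis, which is the intuition the abstract attributes to LC-AAM.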