Integrating a mixed-feature model and multiclass support vector machine for facial expression recognition

  • Authors:
  • Daw-Tung Lin; De-Cheng Pan

  • Affiliations:
  • (Corresponding e-mail: dalton@mail.ntpu.edu.tw) Department of Computer Science and Information Engineering, National Taipei University, 151, University Rd., San-Shia, Taipei 237, Taiwan; Institute of Communication Engineering, National Taipei University, 151, University Rd., San-Shia, Taipei 237, Taiwan

  • Venue:
  • Integrated Computer-Aided Engineering
  • Year:
  • 2009


Abstract

Recent investigations in human-computer interaction (HCI) have incorporated users' behavior and intention into interface design. Automatic facial expression analysis can provide a new modality for the HCI field; thus, automatic facial expression recognition systems have become increasingly significant in recent years. This study reveals the advantages of the proposed mixed-feature model and demonstrates its capability to identify human facial expressions from static images. The proposed framework is a multistage discrimination model based on global appearance features extracted with two-dimensional principal component analysis (2DPCA) and local texture represented by local binary patterns (LBP). The weighted combination of 2DPCA and LBP features is input to a decision directed acyclic graph (DDAG) based support vector machine (SVM) classifier, which discriminates among several prototypic facial expressions. Extensive experiments are performed on the four benchmark databases most commonly cited in the literature: Yale, JAFFE, NimStim and Cohn-Kanade. The experimental results indicate that the proposed mixed-feature model is feasible and outperforms single-feature models. Analytical results also show that the proposed method is more accurate than alternative schemes on the same databases.
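To make the feature-fusion pipeline concrete, the sketch below illustrates the three ingredients the abstract names: a 2DPCA projection of image matrices, a basic 8-neighbor LBP histogram, and a weighted concatenation of the two feature vectors. This is a minimal illustration under stated assumptions, not the authors' implementation: the paper's exact LBP variant, weighting scheme, and the DDAG-SVM stage are not reproduced here, and the function names (`twod_pca_projection`, `lbp_histogram`, `fuse_features`) and the L2-normalized weighted concatenation are hypothetical choices for the sketch.

```python
import numpy as np

def twod_pca_projection(images, d):
    """2DPCA: project each h-by-w image matrix onto the top-d eigenvectors
    of the image covariance matrix G = mean((A - mean)^T (A - mean))."""
    mean = images.mean(axis=0)                        # (h, w) mean image
    centered = images - mean
    # Sum A^T A over all centered images, then average.
    G = np.einsum('nij,nik->jk', centered, centered) / len(images)
    _, vecs = np.linalg.eigh(G)                       # eigenvalues ascending
    X = vecs[:, ::-1][:, :d]                          # top-d eigenvectors, (w, d)
    return np.stack([A @ X for A in images])          # (n, h, d) feature matrices

def lbp_histogram(img, bins=256):
    """Basic LBP: compare each interior pixel with its 8 neighbors to form
    an 8-bit code, then return the normalized code histogram."""
    c = img[1:-1, 1:-1]
    codes = np.zeros_like(c, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()

def fuse_features(pca_vec, lbp_hist, weight=0.5):
    """Hypothetical weighted fusion: L2-normalize each modality, then
    concatenate scaled by `weight` and `1 - weight`."""
    p = pca_vec / (np.linalg.norm(pca_vec) + 1e-12)
    l = lbp_hist / (np.linalg.norm(lbp_hist) + 1e-12)
    return np.concatenate([weight * p, (1 - weight) * l])

# Demo on random "face" images (real use would feed cropped grayscale faces).
rng = np.random.default_rng(0)
imgs = rng.random((10, 16, 16))
pca_feats = twod_pca_projection(imgs, d=4)            # (10, 16, 4)
mixed = fuse_features(pca_feats[0].ravel(), lbp_histogram(imgs[0]), weight=0.7)
print(mixed.shape)                                    # (16*4 + 256,) = (320,)
```

The fused vector would then be fed to a multiclass SVM; a DDAG arranges the k(k-1)/2 pairwise SVMs in a rooted acyclic graph so that a k-class decision takes k-1 binary evaluations.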