Image ratio features for facial expression recognition application

  • Authors:
  • Mingli Song; Dacheng Tao; Zicheng Liu; Xuelong Li; Mengchu Zhou

  • Affiliations:
  • Microsoft Visual Perception Laboratory, Zhejiang University, Hangzhou, China; School of Computer Engineering, Nanyang Technological University, Singapore; Microsoft Research, Redmond, WA; State Key Laboratory of Transient Optics and Photonics, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, China; Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ

  • Venue:
  • IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
  • Year:
  • 2010

Abstract

Video-based facial expression recognition is a challenging problem in computer vision and human-computer interaction. To address this problem, texture features have been extracted and widely used, because they capture the image intensity changes caused by skin deformation. However, existing texture features are sensitive to albedo and lighting variations. To overcome both problems, we propose new texture features called image ratio features. Compared with previously proposed texture features, e.g., high gradient component features, image ratio features are more robust to albedo and lighting variations. In addition, to further improve facial expression recognition accuracy based on image ratio features, we combine image ratio features with facial animation parameters (FAPs), which describe the geometric motions of facial feature points. The performance evaluation is based on the Carnegie Mellon University Cohn-Kanade database, our own database, and the Japanese Female Facial Expression database. Experimental results show that the proposed image ratio features are more robust to albedo and lighting variations and that the combination of image ratio features and FAPs outperforms either feature type alone. In addition, we study asymmetric facial expressions based on our own facial expression database and demonstrate the superior performance of our combined expression recognition system.
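The abstract does not spell out how an image ratio feature is actually computed; the sketch below only illustrates the general ratio-image idea it builds on: a pixel-wise division of an expression frame by an aligned neutral-face reference, which cancels the unknown per-pixel albedo under a Lambertian reflectance assumption. The function name ratio_image, the eps parameter, and the synthetic usage data are illustrative assumptions, not details taken from the paper.

import numpy as np

def ratio_image(expression_frame: np.ndarray,
                neutral_frame: np.ndarray,
                eps: float = 1e-6) -> np.ndarray:
    """Pixel-wise ratio of an expression frame to an aligned neutral frame.

    Under a Lambertian model, intensity = albedo * shading, so dividing the
    two frames cancels the per-pixel albedo and keeps the intensity change
    produced by skin deformation (an assumption; not the paper's exact feature).
    """
    expr = expression_frame.astype(np.float64)
    neut = neutral_frame.astype(np.float64)
    return expr / (neut + eps)  # eps guards against division by zero

# Hypothetical usage on two aligned 64x64 grayscale face images
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    neutral = rng.uniform(10.0, 255.0, size=(64, 64))
    expression = neutral * rng.uniform(0.8, 1.2, size=(64, 64))  # simulated deformation
    print(ratio_image(expression, neutral).mean())

Dividing by the neutral frame, rather than subtracting it, is what removes the multiplicative albedo term; this is the property the abstract credits for robustness to albedo and lighting variations.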