Emotion recognition from arbitrary view facial images

  • Authors:
  • Wenming Zheng; Hao Tang; Zhouchen Lin; Thomas S. Huang

  • Affiliations:
  • Research Center for Learning Science, Southeast University, Nanjing, China; Beckman Institute, University of Illinois at Urbana-Champaign; Visual Computing Group, Microsoft Research Asia, China; Beckman Institute, University of Illinois at Urbana-Champaign

  • Venue:
  • ECCV '10: Proceedings of the 11th European Conference on Computer Vision, Part VI
  • Year:
  • 2010

Abstract

Emotion recognition from facial images is a very active research topic in human-computer interaction (HCI). However, most previous approaches focus only on frontal or nearly frontal view facial images. In contrast, emotion recognition from non-frontal or even arbitrary view facial images is much more difficult, yet of much greater practical utility. To handle the emotion recognition problem for arbitrary view facial images, in this paper we propose a novel method based on the regional covariance matrix (RCM) representation of facial images. We also develop a new discriminant analysis theory that reduces the dimensionality of the facial feature vectors while preserving the most discriminative information, by minimizing an estimated multiclass Bayes error derived under the Gaussian mixture model (GMM). We further propose an efficient algorithm to solve for the optimal discriminant vectors of the proposed discriminant analysis method. We render thousands of multi-view 2D facial images from the BU-3DFE database and conduct extensive experiments on the generated database to demonstrate the effectiveness of the proposed method. Notably, our method requires neither face alignment nor facial landmark localization, which makes it very attractive in practice.
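A regional covariance matrix represents an image region by the covariance of per-pixel feature vectors, which makes it robust to the misalignments that motivate this paper's landmark-free design. The following is a minimal sketch of that representation, assuming a simple feature set of pixel coordinates, intensity, and gradient magnitudes; the abstract does not specify the exact features the authors use.

```python
import numpy as np

def region_covariance(image, top, left, h, w):
    """Covariance matrix of per-pixel features over a rectangular region.

    Assumed per-pixel features: x, y, intensity, |dI/dx|, |dI/dy|.
    The resulting d x d matrix (here d = 5) is a compact, order-less
    descriptor of the region, independent of its size.
    """
    img = image.astype(float)
    gy, gx = np.gradient(img)                      # image gradients
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    feats = np.stack([xs, ys, img, np.abs(gx), np.abs(gy)], axis=-1)
    # Flatten the region into an (h*w, 5) matrix of feature vectors
    region = feats[top:top + h, left:left + w].reshape(-1, feats.shape[-1])
    return np.cov(region, rowvar=False)            # 5 x 5 symmetric PSD

# Example: describe a 16x16 patch of a synthetic 32x32 image
patch_cov = region_covariance(np.random.rand(32, 32), 4, 4, 16, 16)
```

Because covariance matrices live on a Riemannian manifold rather than in a Euclidean space, methods building on RCMs typically map them to vectors (e.g. via the matrix logarithm) before applying discriminant analysis.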