Texture and Shape Information Fusion for Facial Action Unit Recognition

  • Authors:
  • Irene Kotsia; Stefanos Zafeiriou; Nikolaos Nikolaidis; Ioannis Pitas


  • Venue:
  • ACHI '08 Proceedings of the First International Conference on Advances in Computer-Human Interaction
  • Year:
  • 2008

Abstract

A novel method that fuses texture and shape information to achieve Facial Action Unit (FAU) recognition from video sequences is proposed. To extract the texture information, a subspace method based on Discriminant Nonnegative Matrix Factorization (DNMF) is applied to the difference images of the video sequence, computed from the neutral and the most expressive frame, to obtain the desired classification label. The shape information consists of the deformed Candide facial grid (more specifically, the grid node displacements between the neutral and the most expressive facial expression frame) that corresponds to the facial expression depicted in the video sequence. The shape information is then classified using a two-class Support Vector Machine (SVM) system. The fusion of texture and shape information is performed using Median Radial Basis Function (MRBF) Neural Networks (NNs) in order to detect the set of FAUs present. The accuracy achieved on the Cohn-Kanade database is equal to 92.1% when recognizing the 17 FAUs that are responsible for facial expression development.
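To illustrate the texture branch, the following is a rough sketch of nonnegative matrix factorization applied to vectorized difference images. Note this is plain NMF with the classical multiplicative updates, not the discriminant (DNMF) variant the paper actually uses, and all names and dimensions here are hypothetical:

```python
import numpy as np

def nmf(V, r, iters=200, eps=1e-9, seed=0):
    """Plain NMF via multiplicative updates: V (m x n, nonnegative)
    is approximated by W (m x r) @ H (r x n), both nonnegative.
    In the paper's setting, each column of V would be a vectorized
    difference image (most expressive frame minus neutral frame)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        # Multiplicative updates keep W and H nonnegative by construction.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy stand-in for difference images: 10 sequences, 64 pixels each.
V = np.abs(np.random.default_rng(1).random((64, 10)))
W, H = nmf(V, r=4)
err = np.linalg.norm(V - W @ H)
```

The columns of `H` would then serve as low-dimensional texture features per sequence; DNMF additionally folds class-label (discriminant) constraints into these updates, which this sketch omits.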