Sparse temporal representations for facial expression recognition

  • Authors:
  • S. W. Chew;R. Rana;P. Lucey;S. Lucey;S. Sridharan

  • Affiliations:
  • Speech Audio Image and Video Technology Laboratory, University of Technology, Queensland, Australia;Speech Audio Image and Video Technology Laboratory, University of Technology, Queensland, Australia;Disney Research, Pittsburgh;Commonwealth Science and Industrial Research Organisation (CSIRO), Australia;Speech Audio Image and Video Technology Laboratory, University of Technology, Queensland, Australia

  • Venue:
  • PSIVT'11 Proceedings of the 5th Pacific Rim conference on Advances in Image and Video Technology - Volume Part II
  • Year:
  • 2011

Abstract

In automatic facial expression recognition, an increasing number of techniques has been proposed in the literature that exploit the temporal nature of facial expressions. As all facial expressions are known to evolve over time, it is crucial that a classifier be capable of modelling their dynamics. We establish that sparse representation (SR) classifiers are a suitable candidate for this purpose, and subsequently propose a framework for efficiently incorporating expression dynamics into their current formulation. We additionally show that for the SR method to be applied effectively, a certain threshold on image dimensionality must be enforced (unlike in face recognition problems). Third, we determine that recognition rates may be significantly influenced by the size of the projection matrix Φ. To demonstrate these findings, a battery of experiments was conducted on the CK+ dataset for the recognition of the seven prototypic expressions − anger, contempt, disgust, fear, happiness, sadness and surprise − and comparisons were made between the proposed temporal-SR framework, the static-SR framework, and a state-of-the-art support vector machine.
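The sparse-representation classification scheme the abstract refers to can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: a greedy Orthogonal Matching Pursuit solver stands in for the ℓ1 minimization commonly used in SR classifiers, and all function and variable names (`omp`, `src_classify`, `A`, `labels`) are ours. The idea is to code a test sample as a sparse combination of training samples (dictionary columns), then assign the class whose coefficients best reconstruct it.

```python
import numpy as np

def omp(A, y, k):
    """Greedy Orthogonal Matching Pursuit: approximate the sparsest x
    with A @ x ~= y, using at most k dictionary atoms."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on the selected columns
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

def src_classify(A, labels, y, k=5):
    """Sparse-representation classification: sparse-code y over the
    training dictionary A (one column per training sample), then assign
    the class whose atoms yield the smallest reconstruction residual."""
    x = omp(A, y, k)
    residuals = {}
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)   # keep only class-c coefficients
        residuals[c] = np.linalg.norm(y - A @ xc)
    return min(residuals, key=residuals.get)
```

In this setting each dictionary column would be a (projected) face image vector; the paper's point about the projection matrix Φ corresponds to how the raw images are reduced to the rows of `A` before classification.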