Comparing feature-based metrics for facial dynamics analysis

  • Authors: Andrew J. Aubrey, Gary K. L. Tam, David Marshall, Paul L. Rosin (Cardiff University); Hui Fang, Phil W. Grant, Min Chen (Swansea University)
  • Venue: Proceedings of the SSPNET 2nd International Symposium on Facial Analysis and Animation
  • Year: 2010

Abstract

The automatic analysis and classification of facial expressions is a challenging problem. It has been an active area of research for many years, with a range of applications including affective computing, human-computer interaction and lipreading. Although an expression can be captured in still images, video is more widely used for facial dynamics analysis [Zhang et al. 2008], with the advantage that the onset, peak and offset phases of the expression can be captured and used. The Facial Action Coding System (FACS) is the most commonly used scheme for analysing facial behaviour. It segments the face into action units (AUs), each relating to the contraction of a specific facial muscle or set of muscles. To assign AUs automatically, such a system requires facial feature extraction.
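
To make the onset, peak and offset phases mentioned above concrete, the Python sketch below locates the three points for a single expression episode in a hypothetical per-frame intensity signal (e.g. one AU's activation score over time). It is a rough illustration only, not the method of this paper: the synthetic signal and the fraction-of-peak threshold rule are assumptions made for the example.

    import numpy as np

    def segment_expression(intensity, threshold=0.2):
        """Return (onset, peak, offset) frame indices for one expression episode.

        intensity -- 1-D array of per-frame expression intensity (e.g. an AU score)
        threshold -- fraction of the peak value that marks the active region
        """
        intensity = np.asarray(intensity, dtype=float)
        peak = int(np.argmax(intensity))              # frame of maximum intensity
        level = threshold * intensity[peak]           # activity cut-off level
        active = np.flatnonzero(intensity >= level)   # frames at or above the level
        return int(active[0]), peak, int(active[-1])  # first, max and last active frame

    # Synthetic single-episode signal over 60 frames: rises, peaks, then decays.
    t = np.linspace(0.0, 1.0, 60)
    signal = np.exp(-((t - 0.4) ** 2) / 0.02)
    print(segment_expression(signal))  # prints the onset, peak and offset frame indices

A real system would apply such segmentation to tracked feature trajectories rather than a clean synthetic curve, and would typically smooth the signal first; the single-threshold rule here is only the simplest possible choice.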