A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions

  • Authors:
  • Zhihong Zeng, Maja Pantic, Glenn I. Roisman, Thomas S. Huang

  • Affiliations:
  • University of Illinois at Urbana-Champaign, Urbana; Imperial College London, London, and University of Twente, the Netherlands; University of Illinois at Urbana-Champaign, Urbana; University of Illinois at Urbana-Champaign, Urbana

  • Venue:
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • Year:
  • 2009

Abstract

Automated analysis of human affective behavior has attracted increasing attention from researchers in psychology, computer science, linguistics, neuroscience, and related disciplines. However, existing methods typically handle only deliberately displayed and exaggerated expressions of prototypical emotions, despite the fact that deliberate behavior differs in visual appearance, audio profile, and timing from spontaneously occurring behavior. To address this problem, efforts to develop algorithms that can process naturally occurring human affective behavior have recently emerged. Moreover, an increasing number of efforts are reported toward multimodal fusion for human affect analysis, including audiovisual fusion, linguistic and paralinguistic fusion, and multi-cue visual fusion based on facial expressions, head movements, and body gestures. This paper introduces and surveys these recent advances. We first discuss human emotion perception from a psychological perspective. Next, we examine available approaches for machine understanding of human affective behavior and discuss important issues such as the collection and availability of training and test data. Finally, we outline some of the scientific and engineering challenges to advancing human affect sensing technology.
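As a concrete, simplified illustration of the audiovisual fusion the abstract mentions, the sketch below shows decision-level (late) fusion, in which per-modality classifiers are combined at the level of their output probabilities. This is a generic example, not the method of any particular system covered by the survey; the function name, label set, and weighting parameter are hypothetical, and in practice the weight would be tuned on validation data.

```python
import numpy as np

def fuse_decisions(audio_probs, visual_probs, audio_weight=0.5):
    """Decision-level (late) audiovisual fusion.

    Combines class posteriors from an audio classifier and a visual
    classifier with a weighted sum, then picks the top-scoring class.

    audio_probs, visual_probs: 1-D arrays of class probabilities in
        the same label order (e.g., over {anger, happiness, sadness}).
    audio_weight: relative trust in the audio channel (hypothetical
        tuning parameter).
    """
    audio_probs = np.asarray(audio_probs, dtype=float)
    visual_probs = np.asarray(visual_probs, dtype=float)
    fused = audio_weight * audio_probs + (1.0 - audio_weight) * visual_probs
    fused /= fused.sum()  # renormalize to a proper probability distribution
    return fused, int(np.argmax(fused))

# Example: posteriors over (anger, happiness, sadness) from two classifiers.
fused, label = fuse_decisions([0.6, 0.3, 0.1], [0.2, 0.7, 0.1], audio_weight=0.4)
print(fused, label)  # the visual channel dominates here, so label 1 (happiness)
```

Decision-level fusion of this kind is often contrasted with feature-level (early) fusion, where the modalities' feature vectors are concatenated before a single classifier is trained; the late-fusion form shown here has the practical advantage that each modality can be modeled and tuned independently.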