Audio-visual spontaneous emotion recognition

  • Authors:
  • Zhihong Zeng; Yuxiao Hu; Glenn I. Roisman; Zhen Wen; Yun Fu; Thomas S. Huang

  • Affiliations:
  • University of Illinois at Urbana-Champaign; University of Illinois at Urbana-Champaign; University of Illinois at Urbana-Champaign; IBM T.J. Watson Research Center; University of Illinois at Urbana-Champaign; University of Illinois at Urbana-Champaign

  • Venue:
  • ICMI'06/IJCAI'07 Proceedings of the ICMI 2006 and IJCAI 2007 international conference on Artificial intelligence for human computing
  • Year:
  • 2007

Abstract

Automatic multimodal recognition of spontaneous emotional expressions is a largely unexplored and challenging problem. In this paper, we explore audio-visual emotion recognition in a realistic human conversation setting, the Adult Attachment Interview (AAI). Based on the assumption that facial and vocal expressions convey the same coarse affective state, positive and negative emotion sequences are labeled according to the Facial Action Coding System. Facial texture in the visual channel and prosody in the audio channel are integrated in the framework of an AdaBoost multi-stream hidden Markov model (AdaMHMM), in which the AdaBoost learning scheme is used to build the fusion of component HMMs. Our approach is evaluated in spontaneous emotion recognition experiments on the AAI data.
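
To make the fusion idea concrete, the sketch below is a minimal, hypothetical illustration of multi-stream HMM fusion, not the authors' implementation. It trains one Gaussian HMM per emotion class and per stream (prosody and facial texture) with the hmmlearn library, then classifies a sample by a weighted sum of per-stream log-likelihoods. The stream weights here are fixed placeholders; in the paper they are learned with the AdaBoost scheme. The data layout, function names, and class/stream labels are all assumptions.

```python
# Hypothetical sketch of two-stream HMM fusion for bimodal emotion
# classification. NOT the AdaMHMM implementation from the paper: the
# AdaBoost-learned stream combination is approximated by fixed weights.
import numpy as np
from hmmlearn.hmm import GaussianHMM

CLASSES = ["positive", "negative"]      # coarse affective states (per the paper)
STREAMS = ["prosody", "face_texture"]   # audio and visual channels

def train_stream_hmms(train_data, n_states=4):
    """Fit one HMM per (class, stream) pair.

    train_data[cls][stream] is assumed to be a list of (T_i, d)
    feature arrays, one array per labeled emotion sequence.
    """
    models = {}
    for cls in CLASSES:
        for stream in STREAMS:
            seqs = train_data[cls][stream]
            X = np.concatenate(seqs)          # stack frames of all sequences
            lengths = [len(s) for s in seqs]  # per-sequence frame counts
            hmm = GaussianHMM(n_components=n_states, covariance_type="diag")
            hmm.fit(X, lengths)
            models[(cls, stream)] = hmm
    return models

def classify(models, sample, weights=(0.5, 0.5)):
    """Return the class maximizing the weighted sum of stream log-likelihoods.

    sample[stream] is a (T, d) feature array; in AdaMHMM the weights
    would come from boosting rather than being fixed as they are here.
    """
    best_cls, best_score = None, -np.inf
    for cls in CLASSES:
        score = sum(w * models[(cls, s)].score(sample[s])
                    for w, s in zip(weights, STREAMS))
        if score > best_score:
            best_cls, best_score = cls, score
    return best_cls
```

The key design point the sketch preserves is decision-level fusion: each modality keeps its own temporal model, and the channels are combined only at the likelihood stage, which is what allows a boosting scheme to reweight the component HMMs.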