Static vs. dynamic modeling of human nonverbal behavior from multiple cues and modalities

  • Authors:
  • Stavros Petridis;Hatice Gunes;Sebastian Kaltwang;Maja Pantic

  • Affiliations:
  • Imperial College London, London, United Kingdom;Imperial College London, London, United Kingdom;University of Karlsruhe, Karlsruhe, Germany;Imperial College London, London, United Kingdom

  • Venue:
  • Proceedings of the 2009 International Conference on Multimodal Interfaces
  • Year:
  • 2009

Abstract

Human nonverbal behavior recognition from multiple cues and modalities has attracted considerable interest in recent years. Despite this interest, many questions remain open, including the choice of feature representation, the choice between static and dynamic classification schemes, the number and type of cues or modalities to use, and the optimal way of fusing them. This paper compares frame-based vs. window-based feature representations and static vs. dynamic classification schemes for two distinct problems in automatic human nonverbal behavior analysis: multicue discrimination between posed and spontaneous smiles from facial expressions, head, and shoulder movements, and audio-visual discrimination between laughter and speech. Single-cue and single-modality results are compared to multicue and multimodal results using Neural Networks, Hidden Markov Models (HMMs), and 2- and 3-chain coupled HMMs. Subject-independent experimental evaluation shows that: 1) for both static and dynamic classification, fusing data from multiple cues and modalities benefits the overall recognition task; 2) the type of feature representation appears to have a direct impact on classification performance; and 3) static classification is comparable to dynamic classification, both for multicue discrimination between posed and spontaneous smiles and for audio-visual discrimination between laughter and speech.
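
To make the static vs. dynamic comparison concrete, the sketch below contrasts a static classifier trained on window-level statistics with per-class HMMs that score a whole sequence by log-likelihood. It is an illustrative toy example, not the authors' implementation: the synthetic data, feature dimensionality, number of HMM states, and the use of scikit-learn's MLPClassifier and hmmlearn's GaussianHMM are assumptions made for the sake of the example.

```python
# Illustrative sketch only (assumed setup, not the paper's code): static classification
# on window-level statistics vs. dynamic classification with one HMM per class.
import numpy as np
from sklearn.neural_network import MLPClassifier
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

def make_sequence(label, length=50, dim=6):
    """Toy stand-in for a per-frame feature sequence (e.g., audio-visual cues)."""
    base = 0.5 if label == 1 else -0.5
    return base + rng.normal(size=(length, dim))

# Toy data: labeled sequences for two classes (e.g., laughter vs. speech).
train = [(make_sequence(y), y) for y in [0, 1] * 20]
test = [(make_sequence(y), y) for y in [0, 1] * 5]

# Static scheme: summarize each window by mean/std and classify with a neural network.
def window_features(seq):
    return np.concatenate([seq.mean(axis=0), seq.std(axis=0)])

X_tr = np.array([window_features(s) for s, _ in train])
y_tr = np.array([y for _, y in train])
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)

# Dynamic scheme: fit one Gaussian HMM per class, decide by maximum log-likelihood.
hmms = {}
for label in (0, 1):
    seqs = [s for s, y in train if y == label]
    X = np.vstack(seqs)
    lengths = [len(s) for s in seqs]
    hmms[label] = GaussianHMM(n_components=3, covariance_type="diag",
                              n_iter=30, random_state=0).fit(X, lengths)

def hmm_predict(seq):
    return max(hmms, key=lambda lab: hmms[lab].score(seq))

static_acc = np.mean([mlp.predict([window_features(s)])[0] == y for s, y in test])
dynamic_acc = np.mean([hmm_predict(s) == y for s, y in test])
print(f"static (MLP) accuracy:  {static_acc:.2f}")
print(f"dynamic (HMM) accuracy: {dynamic_acc:.2f}")
```

Multicue or multimodal fusion would extend this sketch by concatenating features from several cues before the static classifier, or by coupling per-cue chains (as the paper does with 2- and 3-chain coupled HMMs), which plain hmmlearn does not provide out of the box.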