Evidence Theory-Based Multimodal Emotion Recognition

  • Authors: Marco Paleari, Rachid Benmokhtar, Benoit Huet

  • Affiliation: EURECOM, Sophia Antipolis, France

  • Venue: MMM '09: Proceedings of the 15th International Multimedia Modeling Conference on Advances in Multimedia Modeling

  • Year: 2009

Abstract

Automatic recognition of human affective states is still a largely unexplored and challenging topic. Even more issues arise when dealing with inputs of variable quality or when aiming for real-time, unconstrained, and person-independent scenarios. In this paper, we explore audio-visual multimodal emotion recognition. We present SAMMI, a framework designed to extract real-time emotion appraisals from non-prototypical, person-independent facial expressions and vocal prosody. Several probabilistic fusion methods are compared and evaluated together with a novel fusion technique called NNET. Results show that NNET improves the recognition score (CR+) by about 19% and the mean average precision by about 30% with respect to the best unimodal system.
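
The abstract does not spell out the NNET architecture. The sketch below is only an illustration of the general idea it names: score-level (late) fusion of unimodal classifier outputs through a small neural network. Everything in it is an assumption, not the authors' implementation: the six emotion classes, the hidden-layer size, and the function names (`fuse_scores`, `softmax`) are hypothetical, and the weights are random placeholders where a trained model would be used.

```python
import numpy as np

# Illustrative emotion classes; the label set actually used by SAMMI may differ.
CLASSES = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]
N = len(CLASSES)

rng = np.random.default_rng(0)

# Hypothetical weights of a one-hidden-layer fusion network. In practice
# these would be trained on held-out data; random values here only make
# the example runnable.
W1 = rng.normal(scale=0.1, size=(2 * N, 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, N))
b2 = np.zeros(N)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_scores(face_scores, prosody_scores):
    """Late fusion: concatenate the two unimodal score vectors and
    pass them through a small feed-forward network."""
    x = np.concatenate([face_scores, prosody_scores])  # shape (2N,)
    h = np.tanh(x @ W1 + b1)                           # hidden layer
    return softmax(h @ W2 + b2)                        # fused class scores

# Example: the two unimodal classifiers partially disagree; the fusion
# network produces a single combined score vector.
face = np.array([0.10, 0.05, 0.05, 0.60, 0.10, 0.10])   # facial-expression scores
voice = np.array([0.15, 0.05, 0.10, 0.40, 0.20, 0.10])  # vocal-prosody scores
fused = fuse_scores(face, voice)
print(CLASSES[int(fused.argmax())], fused.round(3))
```

A learned fusion layer of this kind can, in principle, weight the modalities adaptively, which fits the paper's motivation of handling inputs of variable quality better than any single unimodal classifier.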