A Systematic Comparison of Different HMM Designs for Emotion Recognition from Acted and Spontaneous Speech

  • Authors:
  • Johannes Wagner; Thurid Vogt; Elisabeth André

  • Affiliations:
  • Multimedia concepts and applications, Augsburg University, Germany (all authors)

  • Venue:
  • ACII '07 Proceedings of the 2nd international conference on Affective Computing and Intelligent Interaction
  • Year:
  • 2007

Abstract

In this work we elaborate on the use of hidden Markov models (HMMs) for speech emotion recognition as a dynamic alternative to static modelling approaches. Since previous work in this field has not yet established which HMM design should be preferred for this task, we run a systematic analysis of different HMM configurations. Furthermore, experiments are carried out on an acted and a spontaneous emotion corpus, since little is known about the suitability of HMMs for spontaneous speech. Additionally, we consider two different segmentation levels, namely words and utterances. Results are compared with the outcome of a support vector machine classifier trained on global statistics features. While the two approaches performed similarly on the utterance level for both databases, the HMM-based approach outperformed static classification on the word level. However, establishing general guidelines as to which kind of models are best suited proved rather difficult.
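
To illustrate the general scheme the abstract describes (one HMM trained per emotion class, with classification by maximum likelihood over frame-level acoustic features), here is a minimal sketch. It is not the authors' implementation: the library (hmmlearn), the feature representation, and the parameters n_states and n_iter are all illustrative assumptions, and the paper's point is precisely that such configuration choices must be compared systematically.

```python
# Illustrative sketch (not the authors' code): train one Gaussian HMM per
# emotion class and classify a new word/utterance by maximum log-likelihood.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_emotion_hmms(train_data, n_states=5, n_iter=20):
    """train_data: dict mapping emotion label -> list of frame-level feature
    sequences, each of shape (T_i, n_features), e.g. MFCCs per word/utterance."""
    models = {}
    for label, sequences in train_data.items():
        X = np.vstack(sequences)               # concatenate all frames
        lengths = [len(s) for s in sequences]  # sequence boundaries for hmmlearn
        hmm = GaussianHMM(n_components=n_states,
                          covariance_type="diag",
                          n_iter=n_iter)
        hmm.fit(X, lengths)                    # Baum-Welch training
        models[label] = hmm
    return models

def classify(models, sequence):
    """Return the emotion whose HMM assigns the highest log-likelihood."""
    scores = {label: m.score(sequence) for label, m in models.items()}
    return max(scores, key=scores.get)
```

The static baseline mentioned in the abstract differs in that it collapses each word or utterance into a single vector of global statistics (means, extremes, etc.) and feeds it to an SVM, whereas the HMM operates on the frame sequence itself.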