On the Influence of Phonetic Content Variation for Acoustic Emotion Recognition

  • Authors:
  • Bogdan Vlasenko; Björn Schuller; Andreas Wendemuth; Gerhard Rigoll

  • Affiliations:
  • Cognitive Systems, IESK, Otto-von-Guericke University, Magdeburg, Germany; Institute for Human-Machine Communication, Technische Universität München, Germany; Cognitive Systems, IESK, Otto-von-Guericke University, Magdeburg, Germany; Institute for Human-Machine Communication, Technische Universität München, Germany

  • Venue:
  • PIT '08 Proceedings of the 4th IEEE tutorial and research workshop on Perception and Interactive Technologies for Speech-Based Systems: Perception in Multimodal Dialogue Systems
  • Year:
  • 2008

Abstract

Acoustic modeling in today's emotion recognition engines employs general models independent of the spoken phonetic content. This works well enough given sufficient instances to cover a broad variety of phonetic structures and emotions at the same time. However, data is usually sparse in the field, and the question arises whether unit-specific models, such as word emotion models, could outperform the typical general models. This paper therefore addresses the question of how strongly acoustic emotion models depend on the textual and phonetic content. We investigate the influence on the turn and word level using state-of-the-art techniques for frame and word modeling on the well-known public Berlin Emotional Speech and Speech Under Simulated and Actual Stress databases. The results clearly show that the phonetic structure strongly influences the accuracy of emotion recognition.
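The contrast the abstract draws, general emotion models versus word-specific ones, can be illustrated with a toy sketch. Everything below is hypothetical (the words, the single scalar "acoustic feature", and the shift values are invented for illustration; the paper itself uses frame- and word-level modeling on real speech data): when the spoken word shifts the feature distribution, a pooled per-emotion Gaussian confounds word identity with emotion, while per-(word, emotion) Gaussians separate them.

```python
import math
import random

random.seed(0)

EMOTIONS = ["neutral", "anger"]
WORDS = ["ja", "nein"]  # hypothetical vocabulary, not from the paper

# Toy data generator: emotion shifts the feature mean by +2.0, and so does
# the word "nein" -- so a pooled model confuses "nein"+neutral with "ja"+anger.
EMO_SHIFT = {"neutral": 0.0, "anger": 2.0}
WORD_SHIFT = {"ja": 0.0, "nein": 2.0}
NOISE = 0.5

def sample(word, emo):
    return random.gauss(EMO_SHIFT[emo] + WORD_SHIFT[word], NOISE)

def fit(values):
    """Fit a 1-D Gaussian (mean, variance) to a list of samples."""
    m = sum(values) / len(values)
    v = sum((x - m) ** 2 for x in values) / len(values)
    return m, max(v, 1e-6)

def loglik(x, params):
    m, v = params
    return -0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)

train = [(w, e, sample(w, e)) for w in WORDS for e in EMOTIONS for _ in range(200)]
test  = [(w, e, sample(w, e)) for w in WORDS for e in EMOTIONS for _ in range(200)]

# General (phonetic-content-independent) models: one Gaussian per emotion.
general = {e: fit([x for _, e_, x in train if e_ == e]) for e in EMOTIONS}

# Word-specific models: one Gaussian per (word, emotion) pair.
specific = {(w, e): fit([x for w_, e_, x in train if (w_, e_) == (w, e)])
            for w in WORDS for e in EMOTIONS}

def acc_general():
    hits = sum(max(EMOTIONS, key=lambda c: loglik(x, general[c])) == e
               for w, e, x in test)
    return hits / len(test)

def acc_specific():
    # At test time the spoken word is assumed known (e.g. from a recognizer).
    hits = sum(max(EMOTIONS, key=lambda c: loglik(x, specific[(w, c)])) == e
               for w, e, x in test)
    return hits / len(test)

print(f"general model accuracy:       {acc_general():.2f}")
print(f"word-specific model accuracy: {acc_specific():.2f}")
```

Under these assumptions the word-specific models recover near-perfect accuracy while the pooled model misclassifies the confounded (word, emotion) cells, mirroring the abstract's claim that phonetic structure influences recognition accuracy. Note the sketch also assumes the word identity is available at test time, which is the data-sparsity trade-off the paper discusses.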