Extracting emotion from speech: towards emotional speech-driven facial animations

  • Authors:
  • Olusola Olumide Aina;Knut Hartmann;Thomas Strothotte

  • Affiliations:
  • Department of Simulation and Graphics, Otto-von-Guericke University of Magdeburg, Magdeburg, Germany (all authors)

  • Venue:
  • SG'03 Proceedings of the 3rd international conference on Smart graphics
  • Year:
  • 2003


Abstract

Facial expressions and characteristics of speech are exploited intuitively by humans to infer the emotional state of their communication partners. This paper investigates ways to extract emotion from spontaneous speech, aiming to transfer these emotions to appropriate facial expressions of the speaker's virtual representatives. Hence, this paper presents one step towards an emotional speech-driven facial animation system, which promises to be the first truly non-human animation assistant. Different classifier algorithms (support vector machines, neural networks, and decision trees) were compared in extracting emotion from speech features. Results show that these machine-learning algorithms outperform human subjects in extracting emotion from speech alone, i.e., when no additional cues to the emotional state are available.
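The comparison described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration using scikit-learn, not the authors' implementation: the synthetic feature vectors stand in for per-utterance speech features (e.g. pitch, energy, speaking rate), and the four emotion classes are an assumption for illustration.

```python
# Hypothetical sketch: comparing the three classifier families named in the
# abstract (SVM, neural network, decision tree) on synthetic "speech features".
# The data and feature dimensionality are illustrative, not from the paper.
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# Stand-in for prosodic feature vectors extracted per utterance,
# labeled with one of four assumed emotion classes.
X, y = make_classification(n_samples=400, n_features=12, n_informative=8,
                           n_classes=4, random_state=0)

classifiers = {
    "svm": SVC(kernel="rbf"),
    "neural_net": MLPClassifier(hidden_layer_sizes=(32,),
                                max_iter=1000, random_state=0),
    "decision_tree": DecisionTreeClassifier(random_state=0),
}

# Mean 5-fold cross-validated accuracy per classifier family.
scores = {name: cross_val_score(clf, X, y, cv=5).mean()
          for name, clf in classifiers.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.2f}")
```

In practice the relative ranking of the three families would depend on the feature set and corpus; the sketch only shows the evaluation protocol, not the paper's results.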