Robust recognition of emotion from speech

  • Authors:
  • Mohammed E. Hoque; Mohammed Yeasin; Max M. Louwerse

  • Affiliations:
  • Department of Electrical and Computer Engineering / Institute for Intelligent Systems; Department of Electrical and Computer Engineering / Institute for Intelligent Systems; Department of Psychology / Institute for Intelligent Systems, The University of Memphis, Memphis, TN

  • Venue:
  • IVA'06: Proceedings of the 6th International Conference on Intelligent Virtual Agents
  • Year:
  • 2006

Abstract

This paper presents robust recognition of a subset of emotions from salient spoken words, for use by animated agents. To develop and evaluate a model for each emotion in the chosen subset, both prosodic and acoustic features were used to extract intonational patterns and correlates of emotion from the speech samples. The computed features were projected using a combination of linear projection techniques to obtain a compact, clustered representation. The projected features were then used to build emotion models with a set of classifiers organized in a hierarchical fashion. Model performance was measured using a number of classifiers from the WEKA machine learning toolbox. Empirical analysis indicated that lexical information computed from both prosodic and acoustic features at the word level yielded robust classification of emotions.
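
The pipeline described above (word-level prosodic/acoustic features, a combined linear projection, then hierarchical classification) can be sketched compactly. The paper used WEKA; the sketch below substitutes scikit-learn, and the random feature matrix, the four-class label set, and the coarse-then-fine grouping of emotions are all illustrative assumptions rather than the authors' actual setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical word-level feature matrix: one row per spoken word,
# columns are prosodic/acoustic correlates (e.g., pitch/energy statistics).
X = rng.normal(size=(200, 12))
y = rng.integers(0, 4, size=200)   # four emotion classes (placeholder labels)

# Combined linear projection (PCA followed by LDA) to obtain a compact,
# clustered representation, as the abstract describes.
project = make_pipeline(PCA(n_components=8),
                        LinearDiscriminantAnalysis(n_components=3))
Z = project.fit_transform(X, y)

# Hierarchical classification: a coarse classifier first splits the classes
# into two groups (assumed here to be {0,1} vs. {2,3}), then a fine
# classifier resolves the specific emotion within each group.
coarse_y = y // 2
coarse = DecisionTreeClassifier(max_depth=3).fit(Z, coarse_y)
fine = {g: DecisionTreeClassifier(max_depth=3).fit(Z[coarse_y == g],
                                                   y[coarse_y == g])
        for g in (0, 1)}

def predict(z):
    """Route one projected feature vector through the two-level hierarchy."""
    g = coarse.predict(z.reshape(1, -1))[0]
    return fine[g].predict(z.reshape(1, -1))[0]

print("predicted:", predict(Z[0]), "true:", y[0])
```

In practice, X would come from word-aligned pitch and energy measurements rather than random draws, and the grouping used by the coarse classifier would reflect whichever emotion hierarchy the models were built around.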