Multiple feature extraction and hierarchical classifiers for emotions recognition

  • Authors:
  • Enrique M. Albornoz;Diego H. Milone;Hugo L. Rufiner

  • Affiliations:
  • All authors: Centro de I+D en Señales, Sistemas e INteligencia Computacional (SINC(i)), Facultad de Ingeniería y Ciencias Hídricas, Universidad Nacional del Litoral, Ciudad Universitaria, Paraje ...

  • Venue:
  • COST'09 Proceedings of the Second International Conference on Development of Multimodal Interfaces: Active Listening and Synchrony
  • Year:
  • 2009


Abstract

The recognition of the emotional state of a speaker is a multi-disciplinary research area that has received great interest in recent years. One of its most important goals is to improve voice-based human-machine interaction. Recent work in this domain uses prosodic features and spectral characteristics of the speech signal with standard classification methods, but the performance gains attainable with these traditional approaches appear to have reached a limit. In this paper, the spectral characteristics of emotional signals are used to group emotions. Standard classifiers based on Gaussian Mixture Models, Hidden Markov Models and Multilayer Perceptrons are tested. These classifiers are evaluated in different configurations with different features in order to design a new hierarchical method for emotion classification. The proposed multiple-feature hierarchical method improves performance by 6.35% over the standard classifiers.
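To make the hierarchical idea concrete, below is a minimal sketch of a two-stage classifier: a first stage assigns an utterance to a coarse emotion group (in the paper, groups are derived from spectral similarity), and a second, group-specific stage picks the final emotion. Everything here is illustrative, not the authors' implementation: the emotion list, the grouping, the use of scikit-learn MLPs at both stages, and the toy feature vectors are all assumptions.

```python
# Sketch of a two-stage hierarchical emotion classifier.
# Assumptions (not from the paper): feature vectors are precomputed per
# utterance, the emotion-to-group mapping below is hypothetical, and MLPs
# are used at both stages purely for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical emotion labels and a coarse two-group split.
EMOTIONS = ["anger", "joy", "fear", "neutral", "boredom", "sadness"]
GROUP_OF = {"anger": 0, "joy": 0, "fear": 0,          # group 0
            "neutral": 1, "boredom": 1, "sadness": 1}  # group 1

def train_hierarchical(X, y):
    """X: (n_samples, n_features) acoustic features, y: emotion names."""
    groups = np.array([GROUP_OF[e] for e in y])
    # Stage 1: predict the coarse emotion group from the features.
    stage1 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                           random_state=0).fit(X, groups)
    # Stage 2: one classifier per group, trained only on that group's data
    # (the real method may use different features/classifiers per stage).
    stage2 = {}
    for g in np.unique(groups):
        idx = groups == g
        stage2[g] = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                  random_state=0).fit(X[idx], y[idx])
    return stage1, stage2

def predict_hierarchical(stage1, stage2, X):
    g_pred = stage1.predict(X)
    return np.array([stage2[g].predict(x.reshape(1, -1))[0]
                     for g, x in zip(g_pred, X)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: 120 utterances, 24-dimensional feature vectors.
    X = rng.normal(size=(120, 24))
    y = np.array([EMOTIONS[i % len(EMOTIONS)] for i in range(120)])
    s1, s2 = train_hierarchical(X, y)
    print(predict_hierarchical(s1, s2, X[:5]))
```

The appeal of this structure is that each second-stage classifier only has to separate acoustically similar emotions within its group, and each stage can use the feature set (prosodic or spectral) best suited to its sub-problem.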