The recognition of the emotional state of a speaker is a multi-disciplinary research area that has received great interest in recent years. One of its most important goals is to improve voice-based human-machine interaction. Recent work in this domain combines prosodic features and spectral characteristics of the speech signal with standard classification methods, but the performance achievable with these traditional methods has reached a limit. In this paper, the spectral characteristics of emotional signals are used to group emotions. Standard classifiers based on Gaussian Mixture Models, Hidden Markov Models and Multilayer Perceptrons are tested in different configurations with different features, in order to design a new hierarchical method for emotion classification. The proposed multiple-feature hierarchical method improves performance by 6.35% over the standard classifiers.
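The two-stage idea sketched in the abstract (first group spectrally similar emotions, then disambiguate within each group) can be illustrated with a minimal toy pipeline. The sketch below uses synthetic feature vectors, a hypothetical grouping of four emotions into two arousal groups, and scikit-learn's `GaussianMixture` and `MLPClassifier` as stand-ins for the paper's GMM and MLP classifiers; it is not the authors' actual feature set, corpus, or configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic "spectral" feature vectors for four emotions (hypothetical data,
# standing in for features extracted from emotional speech recordings).
emotions = ["anger", "joy", "sadness", "neutral"]
X = rng.normal(size=(400, 8))
X[:100] += 3.0      # anger   (high arousal)
X[100:200] += 1.0   # joy     (high arousal, spectrally close to anger)
X[200:300] -= 3.0   # sadness (low arousal)
X[300:] -= 1.0      # neutral (low arousal, spectrally close to sadness)
y = np.repeat(np.arange(4), 100)

# Hypothetical grouping of emotions by spectral similarity:
# group 0 = {anger, joy}, group 1 = {sadness, neutral}.
group_of = np.array([0, 0, 1, 1])  # emotion index -> group index

# Stage 1: a GMM separates the two coarse emotion groups.
stage1 = GaussianMixture(n_components=2, random_state=0).fit(X)
comp = stage1.predict(X)
# Map each GMM component to the majority group among its training samples.
comp_to_group = {c: np.bincount(group_of[y][comp == c]).argmax()
                 for c in range(2)}

# Stage 2: one MLP per group disambiguates the emotions inside that group.
stage2 = {}
for g in range(2):
    mask = group_of[y] == g
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                        random_state=0)
    stage2[g] = clf.fit(X[mask], y[mask])

def predict(x):
    """Hierarchical prediction: coarse group first, then emotion in group."""
    g = comp_to_group[stage1.predict(x.reshape(1, -1))[0]]
    return emotions[stage2[g].predict(x.reshape(1, -1))[0]]
```

In this toy setting the hierarchy mirrors the paper's motivation: the coarse stage only has to separate well-separated groups, while each second-stage classifier solves an easier two-emotion problem.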