The recognition of a speaker's emotional state is a multi-disciplinary research area that has attracted great interest in recent years. One of its most important goals is to improve voice-based human-machine interaction. Several works in this domain use the prosodic features or the spectral characteristics of the speech signal, combined with neural networks, Gaussian mixtures and other standard classifiers; usually, however, no acoustic interpretation is given for the types of errors in the results. In this paper, the spectral characteristics of emotional signals are used to group emotions on acoustic rather than psychological grounds. Standard classifiers based on Gaussian Mixture Models, Hidden Markov Models and Multilayer Perceptrons are evaluated with different configurations and input features in order to design a new hierarchical method for emotion classification. The proposed multiple-feature hierarchical method for seven emotions, based on spectral and prosodic information, outperforms the standard classifiers with fixed features.
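The hierarchical idea can be sketched as a two-stage pipeline: a first classifier assigns an utterance to an acoustically defined group of emotions, and a group-specific classifier then selects the final label within that group. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: the grouping in GROUPS is hypothetical, mean MFCC vectors (via librosa) stand in for the paper's spectral and prosodic feature sets, and MLPClassifier is used at both stages in place of the GMM/HMM/MLP variants the paper compares.

```python
# Minimal sketch of a two-stage hierarchical emotion classifier.
# Assumptions (not from the paper): the acoustic grouping below is
# hypothetical, mean MFCCs stand in for the spectral/prosodic feature
# sets, and an MLP is used at both stages.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

# Hypothetical acoustic grouping of the seven emotions.
GROUPS = {
    "high_arousal": ["anger", "fear", "joy"],
    "low_arousal": ["boredom", "sadness", "neutral", "disgust"],
}
EMOTION_TO_GROUP = {e: g for g, emos in GROUPS.items() for e in emos}

def features(path: str) -> np.ndarray:
    """Mean MFCC vector as a stand-in for the paper's features."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

def train(paths, labels):
    X = np.stack([features(p) for p in paths])
    # Stage 1: classify the acoustic group of the utterance.
    stage1 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)
    stage1.fit(X, [EMOTION_TO_GROUP[l] for l in labels])
    # Stage 2: one classifier per group, trained only on its emotions.
    stage2 = {}
    for group, emos in GROUPS.items():
        idx = [i for i, l in enumerate(labels) if l in emos]
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)
        clf.fit(X[idx], [labels[i] for i in idx])
        stage2[group] = clf
    return stage1, stage2

def predict(path, stage1, stage2):
    x = features(path).reshape(1, -1)
    group = stage1.predict(x)[0]          # coarse, acoustic decision
    return stage2[group].predict(x)[0]    # fine-grained emotion label
```

One design point this sketch captures: because the stage-2 classifiers only ever discriminate among acoustically similar emotions, confusions at the fine-grained level stay within a group, which is what gives the errors an acoustic interpretation.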