Emotion Classification of Audio Signals Using Ensemble of Support Vector Machines
PIT '08 Proceedings of the 4th IEEE tutorial and research workshop on Perception and Interactive Technologies for Speech-Based Systems: Perception in Multimodal Dialogue Systems
The purpose of this paper is the automatic classification of speech into seven emotional classes: anger, boredom, disgust, fear, gladness, neutral and sadness. A two-stage classification composed of several sub-classifiers is proposed. A feature set of 68 features has been computed over 286 speech samples from the Berlin database. The Sequential Forward Selection (SFS) method has been applied to each classifier in the two stages to select the feature subset used at each step. The first stage, a three-class classification, achieves 87% accuracy, and the overall accuracy over the seven emotional classes is 78%, where the recognition rate of random classification by chance is about 15%.
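The pipeline the abstract describes (per-stage SFS feature selection feeding an ensemble of SVM sub-classifiers) can be sketched as below. This is a minimal illustration, not the authors' implementation: the synthetic data stands in for the 68 acoustic features over 286 Berlin-database utterances, and the coarse three-group mapping used by the first stage is a hypothetical grouping, since the abstract does not state which emotions each first-stage class contains.

```python
# Hedged sketch: two-stage SVM classification with Sequential Forward
# Selection (SFS) per classifier. Synthetic data replaces the real acoustic
# features; the GROUP mapping below is an assumption, not from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

EMOTIONS = ["anger", "boredom", "disgust", "fear",
            "gladness", "neutral", "sadness"]
# Hypothetical stage-1 grouping of the 7 emotions into 3 coarse classes.
GROUP = {0: 0, 4: 0,          # high-arousal positive/negative (assumed)
         1: 1, 5: 1, 6: 1,    # low-arousal (assumed)
         2: 2, 3: 2}          # remaining (assumed)

def sfs_svm(n_features=8):
    """An RBF-kernel SVM preceded by forward feature selection."""
    return make_pipeline(
        StandardScaler(),
        SequentialFeatureSelector(SVC(kernel="rbf"),
                                  n_features_to_select=n_features,
                                  direction="forward", cv=3),
        SVC(kernel="rbf"),
    )

# Synthetic stand-in for 68 features over 286 utterances, 7 classes.
X, y = make_classification(n_samples=286, n_features=68, n_informative=12,
                           n_classes=7, n_clusters_per_class=1,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Stage 1: classify each utterance into one of the three coarse groups.
g_tr = np.array([GROUP[c] for c in y_tr])
stage1 = sfs_svm().fit(X_tr, g_tr)

# Stage 2: one sub-classifier per group, trained only on that group's data.
stage2 = {g: sfs_svm(4).fit(X_tr[g_tr == g], y_tr[g_tr == g])
          for g in set(GROUP.values())}

def predict(X):
    """Route each sample through stage 1, then the matching sub-classifier."""
    groups = stage1.predict(X)
    out = np.empty(len(X), dtype=int)
    for g, clf in stage2.items():
        mask = groups == g
        if mask.any():
            out[mask] = clf.predict(X[mask])
    return out

acc = (predict(X_te) == y_te).mean()
print(f"two-stage accuracy on synthetic data: {acc:.2f}")
```

Note that errors compound across stages: a sample misrouted by stage 1 can never be labeled correctly by stage 2, which is why the reported seven-class accuracy (78%) is below the three-class first-stage accuracy (87%).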