Audio-Based Emotion Recognition in Judicial Domain: A Multilayer Support Vector Machines Approach
MLDM '09 Proceedings of the 6th International Conference on Machine Learning and Data Mining in Pattern Recognition
Thanks to recent progress in the management of judicial proceedings, especially the introduction of audio/video recording facilities, the challenge of identifying emotional states can now be tackled. Discovering the affective states embedded in speech signals could support semantic retrieval of multimedia clips, and thereby a deeper understanding of the mechanisms behind courtroom debates and the decision-making processes of judges and jurors. This paper makes two main contributions: (1) the collection of real-world human emotions from courtroom audio recordings; (2) the investigation of a hierarchical classification system, based on a risk minimization method, able to recognize emotional states from speech signals. The accuracy of the proposed classification approach, named Multilayer Support Vector Machines, has been evaluated by comparing its performance with that of traditional machine learning approaches, using both benchmark datasets and real courtroom recordings. The recognition results obtained by the proposed technique outperform the predictive power of traditional approaches such as SVM, k-Nearest Neighbors, Naive Bayes, Decision Trees, and Bayesian Networks.
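The multilayer idea described in the abstract can be illustrated with a toy sketch: a first-level binary decision routes a feature vector to one of two second-level classifiers, each discriminating within a subset of emotions. The emotion groupings, acoustic features, and linear decision functions below are hypothetical placeholders for illustration, not the authors' actual model.

```python
# Toy sketch of a multilayer (hierarchical) classifier for speech emotion
# recognition. The linear "SVM-like" decision functions and the emotion
# groupings are hypothetical, not the trained model from the paper.

def linear_score(weights, bias, features):
    """Signed distance from a linear decision boundary (SVM-style)."""
    return sum(w * x for w, x in zip(weights, features)) + bias

# Layer 1: high-arousal vs. low-arousal split, over two hypothetical
# normalized acoustic features (e.g. mean pitch and energy).
LAYER1 = ([1.0, 0.8], -0.9)

# Layer 2: one binary classifier per branch, each separating two emotions.
HIGH_AROUSAL = (([0.7, -1.2], 0.1), ("anger", "happiness"))
LOW_AROUSAL = (([-0.5, 0.9], 0.0), ("sadness", "neutral"))

def classify(features):
    """Route the sample down the hierarchy and return an emotion label."""
    branch = HIGH_AROUSAL if linear_score(*LAYER1, features) > 0 else LOW_AROUSAL
    (weights, bias), (pos_label, neg_label) = branch
    return pos_label if linear_score(weights, bias, features) > 0 else neg_label

print(classify([0.9, 0.8]))  # high pitch/energy: routed to the high-arousal branch
print(classify([0.1, 0.2]))  # low pitch/energy: routed to the low-arousal branch
```

The hierarchical routing reduces each decision to a simpler binary subproblem, which is the structural intuition behind layering several SVMs rather than training one flat multi-class model.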