The purpose of this paper is to study the performance of glottal waveform parameters and the Teager Energy Operator (TEO) in distinguishing binary classes of four emotion dimensions (activation, expectation, power, and valence) using authentic emotional speech. The two feature sets were compared with a 1941-dimension acoustic feature set comprising prosodic, spectral, and other voicing-related features extracted with the openSMILE toolkit. The comparison highlights the discriminative ability of TEO for the dimensions activation and power, and of glottal parameters for expectation and valence, on authentic speech data. Using the same classification methodology, TEO and glottal parameters outperformed or performed comparably to the prosodic, spectral, and other voicing-related features (i.e., the openSMILE feature set).
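The TEO features referenced above are built on the discrete Teager Energy Operator, ψ[x](n) = x(n)² − x(n−1)·x(n+1). A minimal sketch of that operator in Python (NumPy assumed; the paper's exact feature pipeline is not specified here, so the frame-level aggregation is omitted):

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager Energy Operator: psi[x](n) = x(n)^2 - x(n-1)*x(n+1).

    Returns an array of length len(x) - 2, since the operator needs one
    sample of context on each side.
    """
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# For a pure tone A*sin(omega*n), the operator evaluates exactly to the
# constant A^2 * sin(omega)^2, jointly tracking amplitude and frequency.
n = np.arange(200)
tone = 0.5 * np.sin(0.3 * n)
psi = teager_energy(tone)
```

For a pure sinusoid the output is constant, which is why TEO-based features are often summarized by simple statistics (e.g., mean and variance) over short speech frames before classification.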