Emotion primitive descriptions are an important alternative to classical emotion categories for describing a human's affective expressions. We build a multi-dimensional emotion space composed of the emotion primitives valence, activation, and dominance. In this study, an image-based, text-free evaluation system is presented that provides an intuitive assessment of these emotion primitives and yields high inter-evaluator agreement. An automatic system for estimating the emotion primitives is introduced. We use a fuzzy logic estimator and a rule base derived from acoustic features in speech, such as pitch, energy, speaking rate, and spectral characteristics. The approach is tested on two databases. The first consists of 680 sentences by 3 speakers containing acted emotions in the categories happy, angry, neutral, and sad. The second contains more than 1000 utterances by 47 speakers with authentic emotion expressions recorded from a television talk show. The estimation results are compared to the human evaluation as a reference and are moderately to highly correlated (r ≥ 0.42). Finally, continuous-valued estimates of the emotion primitives are mapped into the given emotion categories using a k-nearest-neighbor classifier, achieving an overall recognition rate of up to 83.5%. The errors of the direct emotion estimation are compared to the confusion matrices of the classification from primitives. As a conclusion to this continuous-valued emotion primitives framework, speaker-dependent modeling of emotion expression is proposed, since the emotion primitives are particularly suited for capturing dynamics and intrinsic variations in emotion expression.
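The two-stage pipeline described above (a fuzzy-logic estimator that maps acoustic features to continuous primitive values, followed by a k-nearest-neighbor classifier over the primitive space) can be sketched in Python as below. All membership-function ranges, rules, and reference points are illustrative assumptions for a toy example, not the values used in the study; only the activation primitive is estimated here, with valence and dominance handled analogously.

```python
import math

def tri(x, a, b, c):
    """Triangular membership function peaking at b (assumed shape)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def estimate_activation(pitch_hz, energy_db):
    """Mamdani-style rule base with centroid defuzzification (toy rules).

    Returns an activation estimate in [-1, 1]. The feature ranges and
    the two rules are illustrative, not the paper's rule base.
    """
    # Fuzzify the acoustic inputs.
    pitch_high = tri(pitch_hz, 180, 300, 420)
    pitch_low = tri(pitch_hz, 50, 120, 200)
    energy_high = tri(energy_db, 60, 75, 90)
    energy_low = tri(energy_db, 30, 45, 60)
    # Rules: high pitch AND high energy -> high activation (+1);
    #        low pitch AND low energy   -> low activation  (-1).
    rules = [
        (min(pitch_high, energy_high), +1.0),
        (min(pitch_low, energy_low), -1.0),
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0

def knn_category(primitives, references, k=3):
    """Map a (valence, activation, dominance) estimate to a category
    by majority vote among the k nearest reference points."""
    dists = sorted(
        (math.dist(primitives, point), label) for point, label in references
    )
    votes = {}
    for _, label in dists[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Illustrative reference points in (valence, activation, dominance) space;
# in practice these would come from the human-evaluated training data.
REFS = [
    ((0.8, 0.7, 0.6), "happy"), ((0.7, 0.6, 0.5), "happy"),
    ((-0.7, 0.8, 0.7), "angry"), ((-0.8, 0.7, 0.8), "angry"),
    ((0.0, 0.0, 0.0), "neutral"), ((0.1, -0.1, 0.0), "neutral"),
    ((-0.7, -0.6, -0.5), "sad"), ((-0.6, -0.7, -0.6), "sad"),
]
```

With this setup, a high-pitch, high-energy utterance defuzzifies to activation near +1, and a primitive estimate such as (0.75, 0.65, 0.55) lands in the "happy" region of the reference set; the continuous estimates remain available for the speaker-dependent, dynamic modeling the abstract proposes.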