Automatic affect recognition is key to enabling future technical systems to interact with us socially and intelligently by understanding our current affective state. In recent years there has been a shift in the field of affect recognition from “in the lab” experiments with acted data to “in the wild” experiments with spontaneous and naturalistic data. Two major issues in this setting are the proper segmentation of the input and an adequate description and modeling of affective states. The first issue is crucial for responsive, real-time systems such as virtual agents and robots, where the latency of the analysis must be as small as possible. To address it, we introduce a novel method of incremental segmentation to be used in combination with supra-segmental modeling. For modeling continuous affective states we use Long Short-Term Memory Recurrent Neural Networks, with which we show improved performance over standard recurrent neural networks and feed-forward neural networks as well as Support Vector Regression. For experiments we use the SEMAINE database, which contains recordings of spontaneous and natural human-to-Wizard-of-Oz conversations. The recordings are annotated continuously in time and magnitude with FeelTrace for five affective dimensions, namely activation, expectation, intensity, power/dominance, and valence. To exploit dependencies between the five affective dimensions we investigate multitask learning of all five dimensions augmented with the inter-rater standard deviation, and we show improvements for multitask over single-task modeling. Correlation coefficients of up to 0.81 are obtained for the activation dimension and up to 0.58 for the valence dimension. The performance for the remaining dimensions was found to lie between that for activation and valence.
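The abstract does not spell out how incremental segmentation interacts with supra-segmental modeling, so the following is only a minimal sketch of the general idea under assumed parameters: frame-level features arrive as a stream, the current segment is re-emitted after every new frame (keeping latency low), and per-segment functionals such as mean and standard deviation serve as supra-segmental descriptors. All names, the `max_len` parameter, and the choice of functionals are illustrative, not taken from the paper.

```python
# Hypothetical sketch: incremental segmentation + supra-segmental functionals.
# Frames stream in one at a time; the current (possibly partial) segment is
# yielded after each frame so a regressor could update its prediction with
# minimal latency. A segment is reset once it reaches max_len frames.
from statistics import mean, pstdev

def incremental_segments(frames, max_len=100):
    """Yield the growing current segment after every frame; reset at max_len."""
    segment = []
    for frame in frames:
        segment.append(frame)
        yield list(segment)
        if len(segment) >= max_len:
            segment = []

def functionals(segment):
    """Supra-segmental descriptors: per-dimension mean and population std."""
    feats = []
    for dim in zip(*segment):
        feats.append(mean(dim))
        feats.append(pstdev(dim))
    return feats

# Example: a short stream of 2-dimensional frame features.
stream = [(1.0, 0.0), (2.0, 1.0), (3.0, 2.0)]
segs = list(incremental_segments(stream, max_len=2))
# The last emitted segment started fresh after the reset at frame 2.
print(functionals(segs[-1]))  # → [3.0, 0.0, 2.0, 0.0]
```

In a full system, the functional vector of each emitted segment would be fed to the sequence regressor (e.g. an LSTM network) to produce an updated estimate of the five affective dimensions.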