This study describes the development of a real-time automatic affect recognition system. It adopts a multimodal approach in which affect information from two modalities is combined to arrive at an emotion label represented in a valence-arousal space. The SEMAINE Database was used to build the affect model. Prosodic and spectral features were used to predict affect from the voice, and temporal templates called Motion History Images (MHIs) were used to predict affect from the face. Predictions from the face and voice models were combined using decision-level fusion. Using support vector regression (SVR), the system predicted affect labels with a root mean square error (RMSE) of 0.2899 for arousal and 0.2889 for valence.
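The abstract does not spell out the MHI computation, but the standard temporal-template update is compact. The following is a minimal NumPy sketch of the usual Motion History Image recurrence, assuming grayscale frames and simple frame differencing; the names update_mhi, tau, and diff_threshold are illustrative, not taken from the paper.

    import numpy as np

    def update_mhi(mhi, prev_frame, frame, tau=30, diff_threshold=25):
        """One step of the standard Motion History Image update.

        Pixels where the current frame differs from the previous one
        are stamped with the maximum value tau; all other pixels decay
        by 1 toward 0, so intensity in the MHI encodes motion recency.
        """
        # Binary motion silhouette from simple frame differencing
        diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
        silhouette = diff > diff_threshold
        # Decay old motion, then stamp new motion at full intensity
        mhi = np.maximum(mhi - 1, 0)
        mhi[silhouette] = tau
        return mhi

    # Usage: fold over a sequence of grayscale frames
    # mhi = np.zeros(frames[0].shape, dtype=np.int16)
    # for prev, cur in zip(frames, frames[1:]):
    #     mhi = update_mhi(mhi, prev, cur)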
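Likewise, the decision-level fusion of per-modality SVR outputs can be sketched with scikit-learn. The feature matrices below are random stand-ins for the paper's prosodic/spectral voice features and MHI-derived face features, only the arousal dimension is shown, and the equal fusion weight is an assumption, since the abstract does not state the fusion rule.

    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)

    # Stand-in features; in the paper these would come from the SEMAINE Database
    X_voice = rng.normal(size=(200, 20))   # prosodic/spectral voice features
    X_face = rng.normal(size=(200, 40))    # MHI-derived face features
    y_arousal = rng.uniform(-1, 1, size=200)

    # One SVR per modality (valence would be handled the same way)
    svr_voice = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X_voice, y_arousal)
    svr_face = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X_face, y_arousal)

    # Decision-level fusion: weighted average of per-modality predictions
    w_voice = 0.5  # illustrative weight, not specified in the abstract
    pred = w_voice * svr_voice.predict(X_voice) \
        + (1.0 - w_voice) * svr_face.predict(X_face)

    rmse = np.sqrt(np.mean((pred - y_arousal) ** 2))
    print(f"fused arousal RMSE on training data: {rmse:.4f}")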