A first step in developing and testing a robust affective multimodal system is to obtain or access data representing human multimodal expressive behaviour. The collected affect data then has to be annotated before it can be used by automated systems. Most existing studies of emotion or affect annotation are monomodal. In this paper, by contrast, we explore how independent human observers annotate affect displays from monomodal face data compared with bimodal face-and-body data. To this end, we collected visual affect data by recording the face alone and the face-and-body simultaneously. We then conducted a survey in which human observers viewed and labelled the face and face-and-body recordings separately. The results show that, in general, viewing face and body together helps resolve ambiguity when annotating emotional behaviour.
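To illustrate how the difference in annotation ambiguity between the two viewing conditions might be quantified, the sketch below computes Fleiss' kappa over observer labels for face-only versus face-and-body clips. The abstract does not name an agreement measure, so the choice of kappa, the label set, and the observer data here are assumptions for illustration only, not the study's actual method or results.

```python
from collections import Counter

def fleiss_kappa(label_matrix, categories):
    """Fleiss' kappa for per-clip observer labels.

    label_matrix: one inner list of category labels per clip; every clip
                  must be rated by the same number of observers.
    categories:   the full set of possible emotion labels.
    """
    n_clips = len(label_matrix)
    n_raters = len(label_matrix[0])

    # n_ij: how many observers assigned category j to clip i
    counts = [Counter(labels) for labels in label_matrix]

    # Mean per-clip agreement P_bar and chance agreement P_e
    P_bar = sum(
        (sum(c[cat] ** 2 for cat in categories) - n_raters)
        / (n_raters * (n_raters - 1))
        for c in counts
    ) / n_clips
    p = [sum(c[cat] for c in counts) / (n_clips * n_raters) for cat in categories]
    P_e = sum(pj ** 2 for pj in p)

    return (P_bar - P_e) / (1 - P_e)


# Hypothetical observer labels (placeholders, not the study's data):
# three observers label the same two clips under each viewing condition.
emotions = ["happiness", "anger", "fear", "neutral"]
face_only     = [["anger", "fear", "anger"], ["fear", "neutral", "anger"]]
face_and_body = [["anger", "anger", "anger"], ["fear", "fear", "anger"]]

print("face only     :", round(fleiss_kappa(face_only, emotions), 2))
print("face-and-body :", round(fleiss_kappa(face_and_body, emotions), 2))
```

Under this kind of analysis, higher agreement among independent observers in the face-and-body condition than in the face-only condition would be one concrete way of expressing the paper's claim that the added body modality reduces annotation ambiguity.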