To develop and test robust affective multimodal systems, researchers need access to databases containing representative samples of human multimodal expressive behavior. Creating such databases requires a major effort in defining representative behaviors, choosing the expressive modalities, and collecting and labeling large amounts of data. At present, public databases exist only for single expressive modalities, such as facial expression analysis. A number of gesture databases of static and dynamic hand postures and dynamic hand gestures also exist. However, there is no readily available database that combines affective face and body information in a genuinely bimodal manner. Accordingly, in this paper we present a bimodal database, recorded simultaneously by two high-resolution cameras, for use in the automatic analysis of human nonverbal affective behavior.