The design of affective interfaces, such as credible expressive characters in storytelling applications, requires understanding and modeling the relations between realistic emotions and behaviors across modalities such as facial expressions, speech, hand gestures, and body movements. Yet research on emotional multimodal behavior has mostly focused on individual modalities during acted basic emotions. In this paper we describe the coding scheme that we designed for annotating the multimodal behaviors observed during mixed, non-acted emotions. We explain how we used it to annotate videos from a corpus of emotionally rich TV interviews, and we illustrate how the annotations can be used to compute expressive profiles of the videos and relations between non-basic emotions and multimodal behaviors.