Embodied contextual agent in information delivering application
Proceedings of the first international joint conference on Autonomous agents and multiagent systems: part 2
In this paper we present an Embodied Conversational Agent (ECA) model able to display rich verbal and non-verbal behaviors. The selection of these behaviors should depend not only on factors tied to the agent's individuality, such as her culture, her social and professional role, and her personality, but also on a set of contextual variables (such as her interlocutor and the social setting of the conversation) and on dynamic variables (beliefs, goals, emotions). We describe the representation scheme and the computational model of behavior expressivity of the Expressive Agent System that we have developed. We also explain how the multi-level annotation of a corpus of emotionally rich TV video interviews can provide context-dependent knowledge as input for the specification of the ECA (e.g., which contextual cues and levels of representation are required to enable the proper recognition of the emotions).
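The abstract describes behavior selection as a function of stable individual factors (culture, role, personality), contextual variables (interlocutor, conversation setting), and dynamic variables (beliefs, goals, emotions). A minimal sketch of this idea, in Python, might look like the following; all class names, fields, and selection rules here are hypothetical illustrations, not the system described in the paper:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AgentProfile:
    """Stable individual factors (hypothetical fields)."""
    culture: str
    role: str
    personality: str

@dataclass
class Context:
    """Contextual variables of the conversation (hypothetical fields)."""
    interlocutor: str
    setting: str  # e.g. "formal" or "casual"

@dataclass
class DynamicState:
    """Dynamic variables that evolve during the interaction."""
    beliefs: List[str] = field(default_factory=list)
    goals: List[str] = field(default_factory=list)
    emotion: str = "neutral"

def select_behaviors(profile: AgentProfile,
                     context: Context,
                     state: DynamicState) -> List[str]:
    """Toy rule-based selection of multimodal behaviors.

    Each rule maps one input factor to a behavior tag; a real system
    would combine many more factors and modalities.
    """
    behaviors = []
    # Current emotion drives the facial expression.
    behaviors.append(f"facial:{state.emotion}")
    # A formal social setting restrains gesturing.
    if context.setting == "formal":
        behaviors.append("gesture:restrained")
    else:
        behaviors.append("gesture:expansive")
    # An extroverted personality increases gaze contact.
    if profile.personality == "extrovert":
        behaviors.append("gaze:frequent")
    else:
        behaviors.append("gaze:occasional")
    return behaviors
```

For example, an extroverted agent feeling joy in a formal setting would be assigned a joyful facial expression but restrained gestures, showing how individual, contextual, and dynamic factors jointly shape the output.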