This work proposes a real-time multimodal expression model for virtual humans. Five modalities explore the affordances of the body: deterministic, non-deterministic, gesticulation, facial, and vocal expression. Deterministic expression is keyframe body animation. Non-deterministic expression is robotics-based procedural body animation. Vocal expression is voice synthesis through Festival, parameterized through SABLE. Facial expression is lip-synch and emotion expression through a parametric muscle-based face model. Inspired by psycholinguistics, gesticulation expression animates unconventional, idiosyncratic, and unconscious hand gestures, described as sequences of Portuguese Sign Language hand shapes, positions, and orientations. Inspired by the arts, one further modality goes beyond the body to explore the affordances of the environment, expressing emotions through camera, lights, and music. To control multimodal expression, this work proposes a high-level, integrated, synchronized markup language: the expressive markup language. Finally, three studies, involving a total of 197 subjects, evaluated the model in storytelling contexts and produced promising results. Copyright © 2006 John Wiley & Sons, Ltd.
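The abstract does not reproduce the expressive markup language itself. The following Python sketch is purely illustrative: the element and attribute names (act, speech, face, gesture, camera, light, music) are hypothetical stand-ins, not taken from the paper, and the sketch only conveys the general idea of one high-level script that synchronizes vocal, facial, gestural, and environment expression around a single utterance.

# Illustrative sketch only: element and attribute names are hypothetical,
# not the paper's actual expressive markup language (EML).
import xml.etree.ElementTree as ET

def build_sample_script() -> str:
    # Root element grouping one synchronized expressive "act".
    act = ET.Element("act", {"emotion": "joy", "intensity": "0.8"})

    # Vocal expression: text to synthesize (e.g. via Festival), with
    # prosody hints that a SABLE-like layer could realize.
    speech = ET.SubElement(act, "speech", {"rate": "+10%", "pitch": "+5%"})
    speech.text = "What a wonderful surprise!"

    # Facial expression: lip-synch would follow the speech element;
    # here only an emotional overlay is specified.
    ET.SubElement(act, "face", {"expression": "smile", "blend": "0.7"})

    # Gesticulation: a hand gesture described by hand shape, position and
    # orientation (the paper uses Portuguese Sign Language hand shapes as
    # its descriptive vocabulary).
    gesture = ET.SubElement(act, "gesture", {"hand": "right"})
    ET.SubElement(gesture, "shape", {"id": "open-palm"})
    ET.SubElement(gesture, "position", {"x": "0.2", "y": "1.1", "z": "0.3"})

    # Environment expression: camera, lights and music beyond the body.
    ET.SubElement(act, "camera", {"shot": "close-up"})
    ET.SubElement(act, "light", {"key": "warm", "intensity": "0.9"})
    ET.SubElement(act, "music", {"mood": "upbeat"})

    return ET.tostring(act, encoding="unicode")

if __name__ == "__main__":
    print(build_sample_script())

Running the sketch prints a single XML fragment; the point is only that one declarative script can carry the body, face, voice, and environment directives that the model synchronizes at runtime.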