In this paper, a new modular framework (the EVA framework) and the expressive embodied conversational agent EVA are presented. From talking heads to fully animatable bodies, and through techniques such as behavioral and emotion modeling, researchers are working toward interaction interfaces whose behavior is as natural as possible. The ECA EVA presented in this article is a mesh-based multi-part model supporting both bone-based and morph-target-based animation. EVA can speak and perform gestures such as facial expressions, gaze shifts, and hand gestures. Each gesture is described as a composition of movements of one or more base elements (bones and/or morph targets), and can be further fine-tuned using a time attribute (speed) and a space attribute (stress). Each gesture can therefore be unique, and is either predefined ("offline" modeling) or given by an XML-based description ("online" modeling). The proposed multi-part model concept enables EVA to perform several gestures simultaneously and independently of each other (e.g., emotion blending, expressive speech). In addition, the multi-part concept allows each model part to be updated easily, even while EVA is being animated. The embodied conversational agent EVA thus provides personalization of both its behavior (at the gesture level) and its appearance. EVA's animation engine is built on Panda3D and fully supports the Python programming language.
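The gesture model described above — a gesture as a composition of base-element movements, fine-tuned by a speed (time) attribute and a stress (space) attribute, with simultaneous gestures blended across independent model parts — can be sketched in plain Python. This is an illustrative sketch only; the class and attribute names below are assumptions for exposition and do not reflect EVA's actual API or its Panda3D bindings.

```python
from dataclasses import dataclass, field
from typing import List, Tuple, Dict


@dataclass
class Movement:
    """Keyframed movement of one base element (a bone or a morph target)."""
    element: str                          # bone or morph-target identifier
    keyframes: List[Tuple[float, float]]  # (time, value) pairs


@dataclass
class Gesture:
    """A gesture composed of base-element movements, tunable in time and space."""
    name: str
    movements: List[Movement] = field(default_factory=list)
    speed: float = 1.0   # time attribute: >1.0 plays the gesture faster
    stress: float = 1.0  # space attribute: >1.0 exaggerates amplitudes

    def sample(self, t: float) -> Dict[str, float]:
        """Element values at animation time t, after speed/stress scaling."""
        out = {}
        for m in self.movements:
            scaled = [(kt / self.speed, kv * self.stress)
                      for kt, kv in m.keyframes]
            out[m.element] = _lerp_track(scaled, t)
        return out


def _lerp_track(keys: List[Tuple[float, float]], t: float) -> float:
    """Linear interpolation over a sorted (time, value) track, clamped at ends."""
    keys = sorted(keys)
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return keys[-1][1]


def blend(gestures: List[Gesture], t: float) -> Dict[str, float]:
    """Play several gestures at once: each drives only its own base elements,
    so gestures on independent model parts do not interfere."""
    pose: Dict[str, float] = {}
    for g in gestures:
        pose.update(g.sample(t))
    return pose


# A smile morph played at double speed and half amplitude, blended with an
# independent head-turn bone gesture (all names are hypothetical).
smile = Gesture("smile",
                [Movement("morph:smile", [(0.0, 0.0), (1.0, 1.0)])],
                speed=2.0, stress=0.5)
gaze = Gesture("gaze_left",
               [Movement("bone:head_yaw", [(0.0, 0.0), (1.0, 30.0)])])

pose = blend([smile, gaze], t=0.25)
```

Because each gesture only touches the elements it names, the blend of `smile` and `gaze_left` drives the morph and the bone independently — the same property the multi-part model uses for emotion blending during expressive speech.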