Xface: MPEG-4 based open source toolkit for 3D Facial Animation
Proceedings of the working conference on Advanced visual interfaces
In this paper, we present XfaceEd, our open source, platform-independent tool for authoring 3D embodied conversational agents (ECAs). Following the MPEG-4 Facial Animation (FA) standard, XfaceEd provides an easy-to-use interface for generating MPEG-4-ready ECAs from static 3D models. Users can set MPEG-4 Facial Definition Points (FDP) and Facial Animation Parameter Units (FAPU), define the zone of influence of each feature point, and specify how this influence is propagated among the neighboring vertices. As an alternative to MPEG-4, one can also specify morph targets for different categories such as visemes, emotions and expressions, in order to achieve facial animation using the keyframe interpolation technique. Morph targets from different categories are blended to create more lifelike behaviour. Results can be previewed and parameters tweaked in real time within the application for fine tuning. Changes take effect immediately, which in turn ensures rapid production. The final output is a configuration file in XML format that can be interpreted by XfacePlayer or other applications, enabling easy authoring of embodied conversational agents for multimodal environments.
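The morph-target approach described above can be sketched in a few lines: each target stores per-vertex positions for the same mesh topology as the neutral face, weighted offsets from several categories (visemes, emotions, expressions) are summed, and a keyframe weight is interpolated over time. This is a minimal illustrative sketch, not Xface's actual API; all names and the data layout are assumptions.

```python
# Illustrative morph-target blending (blend shapes), as outlined in the
# abstract. Data layout and function names are hypothetical, not Xface's.

def blend_morph_targets(neutral, targets, weights):
    """Blend weighted morph targets over a neutral mesh.

    neutral: list of (x, y, z) vertex positions
    targets: dict name -> list of (x, y, z) positions (same vertex count)
    weights: dict name -> blend weight, typically in [0, 1]
    """
    result = []
    for i, (nx, ny, nz) in enumerate(neutral):
        dx = dy = dz = 0.0
        # Sum weighted offsets from every active target (e.g. a viseme
        # blended with an emotion target for more lifelike behaviour).
        for name, w in weights.items():
            tx, ty, tz = targets[name][i]
            dx += w * (tx - nx)
            dy += w * (ty - ny)
            dz += w * (tz - nz)
        result.append((nx + dx, ny + dy, nz + dz))
    return result


def lerp_weight(w0, w1, t):
    """Keyframe interpolation: linearly ramp a blend weight for t in [0, 1]."""
    return w0 + (w1 - w0) * t
```

For example, blending a "smile" target at weight 0.5 moves every vertex halfway toward the target's position, and `lerp_weight` animates that weight between two keyframes.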