This paper presents a model for generating personalized facial animations for avatars using Performance-Driven Animation (PDA). The approach lets users reflect their facial expressions in their avatars, taking as input a small set of feature points provided by Computer Vision (CV) tracking algorithms. The model is based on the MPEG-4 Facial Animation standard and uses a hierarchy of animation parameters to animate face regions for which CV data is lacking. To deform the face, we use two computationally cheap skin-mesh deformation methods that animate the avatar in real time. We conducted a qualitative evaluation with human subjects; the results show that the proposed model generates coherent and visually satisfactory animations.
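The abstract describes deforming a skin mesh from a sparse set of tracked feature points. A common lightweight way to do this (a sketch only; the paper's two deformation methods are not specified here, and the names `rbf_weights` and `deform` are illustrative) is a radial basis function warp: solve for weights that exactly reproduce the feature-point displacements, then apply the resulting smooth displacement field to every mesh vertex.

```python
import numpy as np

def rbf_weights(control_pts, displacements, eps=1.0):
    """Fit per-axis RBF weights so the warp interpolates the
    given displacements at the control (feature) points."""
    # Pairwise distances between control points -> Gaussian kernel matrix
    d = np.linalg.norm(control_pts[:, None, :] - control_pts[None, :, :], axis=-1)
    phi = np.exp(-(eps * d) ** 2)
    # One linear solve per coordinate axis (displacements is N x 3)
    return np.linalg.solve(phi, displacements)

def deform(vertices, control_pts, weights, eps=1.0):
    """Displace every mesh vertex by the RBF field defined by the weights."""
    d = np.linalg.norm(vertices[:, None, :] - control_pts[None, :, :], axis=-1)
    phi = np.exp(-(eps * d) ** 2)  # M x N kernel between vertices and controls
    return vertices + phi @ weights
```

Because the kernel matrix is solved exactly, control points land precisely on their targets while nearby vertices follow smoothly; fitting is one N x N solve per frame (or per expression) and deformation is a single matrix product, which is consistent with the real-time constraint the abstract states.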