We present a data-driven editing system for 3D facial motion capture data, built on the automated construction of an orthogonal blendshape face model and a constrained weight propagation technique, with the aim of bridging the popular facial motion capture technique and the blendshape approach. The system transforms the facial-motion-capture-editing problem into a blendshape-animation-editing problem. Given a collected facial motion capture data set, we construct, for each anatomical region of the human face, a truncated PCA space spanned by the largest retained eigenvectors and a corresponding blendshape face model. Modifying the blendshape weights (PCA coefficients) is therefore equivalent to editing the underlying motion capture sequence. In addition, a constrained weight propagation technique allows animators to balance automation against flexible control.
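The per-region construction above can be illustrated with a minimal sketch (not the authors' code; the function names, the toy data, and the choice of SVD for PCA are our own assumptions). It builds a truncated PCA basis for one facial region and shows that editing a retained PCA coefficient, i.e., a blendshape weight, and reconstructing is the same as editing the motion capture frames directly:

```python
# Hypothetical sketch: truncated PCA as an orthogonal blendshape basis
# for one anatomical region of the face.
import numpy as np

def build_region_blendshapes(frames, k):
    """frames: (n_frames, n_dims) marker displacements for one facial region.
    Returns the mean face, the top-k eigenvectors (blendshape basis),
    and the per-frame weights (PCA coefficients)."""
    mean = frames.mean(axis=0)
    centered = frames - mean
    # SVD of the centered data yields the principal components;
    # keep only the k largest (the "greatest retained eigenvectors").
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                # (k, n_dims) orthogonal blendshape basis
    weights = centered @ basis.T  # (n_frames, k) blendshape weights
    return mean, basis, weights

def reconstruct(mean, basis, weights):
    # Mapping weights back through the basis recovers the mocap frames.
    return mean + weights @ basis

# Toy data: 50 frames of a 12-dimensional region (hypothetical sizes).
rng = np.random.default_rng(0)
frames = rng.normal(size=(50, 12))
mean, basis, w = build_region_blendshapes(frames, k=3)

# Editing one blendshape weight of one frame edits that mocap frame alone;
# all other frames are reconstructed unchanged.
w_edit = w.copy()
w_edit[10, 0] += 2.0
edited = reconstruct(mean, basis, w_edit)
```

Because the basis is orthonormal, each weight can be adjusted independently of the others, which is what makes region-wise blendshape editing of the mocap data well posed.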