One of the holy grails of computer graphics is the generation of photorealistic images driven by motion data. Producing convincing human animation may not be the single most challenging problem in the field, but it is certainly one of its ultimate goals. Among full-body human animations, facial animation is especially difficult because of its subtlety and because human observers are intimately familiar with faces. In this paper, we present our work on lip-sync animation, a subproblem of facial animation: a framework for synthesizing lip-sync character speech animation in real time from a given speech sequence and its corresponding text.
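To make the pipeline concrete, a common way to drive lip-sync from a phoneme-aligned transcript is to map each phoneme to a viseme (a visually distinct mouth shape) and interpolate blend-shape weights over time. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's actual method; the table `PHONEME_TO_VISEME` and the function `viseme_weights_at` are invented names, and the triangular ramp is only a crude stand-in for real coarticulation modeling.

```python
# Hypothetical sketch of a phoneme-to-viseme lip-sync step.
# Phonemes (with timings) are mapped to viseme blend-shape weights,
# which a renderer would apply to the face mesh each frame.

# Tiny illustrative phoneme-to-viseme table (ARPAbet-style symbols).
PHONEME_TO_VISEME = {
    "AA": "open", "IY": "wide", "UW": "round",
    "M": "closed", "B": "closed", "P": "closed",
    "F": "lower_lip", "V": "lower_lip",
}

def viseme_weights_at(t, phoneme_track):
    """Return viseme blend-shape weights at time t (seconds).

    phoneme_track: list of (start, end, phoneme) tuples, sorted by start.
    Each viseme ramps linearly from 0 at the interval edges to 1 at the
    midpoint, a crude approximation of coarticulation smoothing.
    """
    weights = {}
    for start, end, ph in phoneme_track:
        if start <= t <= end and ph in PHONEME_TO_VISEME:
            mid = 0.5 * (start + end)
            half = max(0.5 * (end - start), 1e-6)
            w = 1.0 - abs(t - mid) / half  # triangular ramp in [0, 1]
            v = PHONEME_TO_VISEME[ph]
            weights[v] = max(weights.get(v, 0.0), w)
    return weights

# Example: the mouth is fully open at the midpoint of an "AA" phoneme.
track = [(0.0, 0.1, "M"), (0.1, 0.3, "AA")]
print(viseme_weights_at(0.2, track))  # -> {'open': 1.0}
```

In a real-time system, such a lookup would run once per rendered frame against the phoneme timings produced by forced alignment of the speech audio with its transcript.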