Data-driven approaches have been used successfully for realistic visual speech synthesis. However, little effort has been devoted to real-time lip-synching for interactive applications. In particular, algorithms based on a graph of motions are notorious for their exponential complexity. In this paper, we present a greedy graph-search algorithm that yields vastly superior performance and enables real-time motion synthesis from a large database of motions. The time complexity of the algorithm is linear in the length of the input utterance. In our experiments, synthesis for an input sentence of average length takes under one second.
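To illustrate why a greedy search over a motion graph is linear in utterance length, here is a minimal sketch under assumed data structures (the `MotionNode` class, phoneme labels, and cost dictionary are illustrative inventions, not the paper's actual representation): at each phoneme of the input, the algorithm commits to the cheapest matching outgoing transition instead of exploring all paths.

```python
from dataclasses import dataclass

# Illustrative node type: a captured motion segment labeled with the
# phoneme it articulates. This is an assumption for the sketch, not
# the paper's data model.
@dataclass(frozen=True)
class MotionNode:
    name: str      # identifier of the motion segment
    phoneme: str   # phoneme this segment articulates

def greedy_synthesize(graph, start, phonemes):
    """Greedily pick one motion segment per input phoneme.

    `graph` maps each node to a dict {successor: transition_cost}.
    At each step we take the cheapest successor matching the next
    phoneme, so the work is linear in len(phonemes) (assuming
    bounded out-degree), rather than exponential over all paths.
    """
    path, current = [start], start
    for ph in phonemes:
        # Candidate successors articulating the required phoneme;
        # node name breaks cost ties deterministically.
        candidates = [(cost, nxt.name, nxt)
                      for nxt, cost in graph.get(current, {}).items()
                      if nxt.phoneme == ph]
        if not candidates:
            raise ValueError(f"no motion segment for phoneme {ph!r}")
        _, _, current = min(candidates)
        path.append(current)
    return path
```

A full search would compare every path through the graph; the greedy commitment trades global optimality for a single cheapest-edge decision per phoneme, which is what makes interactive-rate synthesis feasible.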