We address the problem of tracking and reconstructing 3D human lip motions from a 2D view. This problem is challenging due both to the complex nature of lip motions and to the minimal data available from a raw video stream of the face. We counter both of these difficulties with statistical approaches. We first build a physically based 3D model of the lips and train it to cover only the subspace of valid lip motions. We then track this model in video by finding the shape within the subspace that maximizes the posterior probability of the model given the observed features. In this study, the features are the likelihoods of the lip and non-lip color classes: we iteratively derive forces from these likelihoods, apply them to the physical model, and converge to the final solution. Because the model is fully 3D, this framework allows us to track the lips from any head pose. In addition, because of the constraints imposed by the learned subspace of the model, we are able to accurately estimate the full 3D lip shape from the 2D view.
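The core loop described above (derive forces from image likelihoods, apply them to the model, and keep the update inside the learned subspace) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the subspace is a hypothetical PCA-style basis over 2D contour points, and the lip/non-lip color-likelihood forces are replaced by a toy force field that pulls each point toward a target contour.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned subspace of lip shapes: a mean contour plus a few
# PCA-style basis vectors (all names and data here are illustrative).
n_points = 20
theta = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
mean_shape = np.stack([np.cos(theta), 0.5 * np.sin(theta)], axis=1)
basis = rng.standard_normal((3, n_points, 2)) * 0.05  # 3 modes of variation

def shape_from_coeffs(c):
    """Reconstruct a lip contour from subspace coefficients c."""
    return mean_shape + np.tensordot(c, basis, axes=1)

def color_force(points, target):
    """Stand-in for forces derived from lip/non-lip color likelihoods:
    here we simply pull each point toward a 'target' contour, mimicking
    the gradient of an image-likelihood term."""
    return target - points

# Simulated ground-truth shape lying inside the subspace.
true_c = np.array([0.8, -0.3, 0.5])
target = shape_from_coeffs(true_c)

# Iterative tracking: compute forces, then constrain the update to the
# subspace by solving for the least-squares coefficient change.
c = np.zeros(3)
flat_basis = basis.reshape(3, -1).T  # (2 * n_points, 3)
for _ in range(50):
    pts = shape_from_coeffs(c)
    f = color_force(pts, target)                 # per-point force field
    dc, *_ = np.linalg.lstsq(flat_basis, f.ravel(), rcond=None)
    c += 0.5 * dc                                # damped update toward the MAP shape

print(np.allclose(c, true_c, atol=1e-3))
```

Projecting the force field onto the basis is what enforces the subspace constraint: arbitrary per-point displacements are reduced to the few coefficients of the learned motion model, which is also what makes recovering full 3D shape from a single 2D view feasible.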