Modeling a detailed 3D expressive face from limited user constraints is a challenging task. In this paper, we present a facial expression editing technique based on a dynamic graphical model. The probabilistic relations between facial expressions and the complex combinations of local facial features, as well as the temporal behavior of facial expressions, are represented by a hierarchical dynamic Bayesian network. Given limited user constraints on a sparse feature mesh, the system infers the basis expression probabilities, which are then used to locate the corresponding expressive mesh in the shape space spanned by the basis models. Experiments demonstrate that dense 3D facial meshes matching the user constraints can be synthesized effectively.
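The final synthesis step can be illustrated with a minimal sketch: assuming the expressive mesh is a convex combination of basis expression meshes weighted by the inferred probabilities, the blend reduces to a weighted sum of vertex positions. The inference step itself (the hierarchical dynamic Bayesian network) is not shown; the probabilities below are hypothetical inputs, and `blend_expression` is an illustrative name, not from the paper.

```python
import numpy as np

def blend_expression(basis_meshes, probabilities):
    """Locate a mesh in the shape space spanned by the basis models.

    basis_meshes : (K, V, 3) array of K basis meshes with V vertices each
    probabilities: (K,) array of inferred basis expression probabilities
    """
    w = np.asarray(probabilities, dtype=float)
    w = w / w.sum()  # normalize so the weights form a convex combination
    # Weighted sum over the K basis meshes -> one (V, 3) dense mesh
    return np.tensordot(w, np.asarray(basis_meshes, dtype=float), axes=1)

# Toy example: two basis "meshes" with a single vertex each.
neutral = np.array([[[0.0, 0.0, 0.0]]])  # (1, 1, 3)
smile   = np.array([[[1.0, 0.0, 0.0]]])  # (1, 1, 3)
basis = np.concatenate([neutral, smile])  # (2, 1, 3)
mesh = blend_expression(basis, [0.25, 0.75])  # -> vertex at [0.75, 0, 0]
```

In practice the basis meshes would be the dense expressive models of the paper, and the weights would come from the Bayesian inference over the user-constrained sparse feature mesh.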