This paper describes a method for cloning faces from two orthogonal pictures and for generating populations from a small number of such clones. An efficient method reconstructs animation-ready 3D heads from pictures, starting with the extraction of feature points from the orthogonal picture sets. Data from several such heads are used to statistically infer the parameters of a multivariate probability distribution characterizing a hypothetical population of heads. A previously constructed, animation-ready generic model is then transformed into each individualized head, based on features either extracted from the orthogonal pictures or determined by a sample point drawn from the multivariate distribution. For individuals reconstructed from pictures, 2D texture images are obtained from projections of the 3D heads and fitted to the clone in a fully automated procedure that yields 360° texture mapping. For heads generated by population sampling, a texture morphing algorithm produces new texture mappings.
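The population-generation step described above can be sketched in a few lines. The sketch below assumes a multivariate normal model over feature-point parameter vectors, which is one common choice for this kind of statistical inference; the variable names, the Gaussian assumption, and the dimensions are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical input: each row is the feature-point parameter vector
# extracted from one measured head (e.g., normalized landmark coordinates).
measured_heads = rng.normal(size=(8, 5))  # 8 heads, 5 parameters each

# Infer the parameters of the multivariate distribution from the samples.
mean = measured_heads.mean(axis=0)
cov = np.cov(measured_heads, rowvar=False)

# Draw sample points for a hypothetical population of new heads; each
# sampled vector would then drive the transformation of the generic model.
population = rng.multivariate_normal(mean, cov, size=100)
print(population.shape)  # (100, 5)
```

Each sampled row plays the same role as a set of extracted feature points: it parameterizes the deformation of the generic model into one new individualized head.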