In this paper, we present a system for synthesizing 3D human face models with different expressions from a single facial image. Given a frontal image of the target face with a neutral expression, we first detect several key points describing the shape of the face using an Active Shape Model (ASM). We then apply RBF-based scattered-data interpolation to reconstruct a 3D target face, using a neutral-expression 3D face model as reference. By analyzing a series of 3D expression face models, we automatically segment the 3D reference model into regions, each corresponding to a facial organ. From the expression set we construct a motion model for each facial action with respect to the target face in a locally consistent manner. Finally, the reconstructed neutral-expression 3D target face model and the facial-action motion models are combined to generate 3D target faces with various expressions. Our work makes three contributions: (1) We employ a set of registered 3D facial expression models as input, which enables us to generate more complex and visually realistic expressions than other parameter-based approaches and 2D image-based methods. (2) On the basis of a clustering-based segmentation, we develop a localized linear expression model, which makes it possible to generate different facial expressions both locally and globally, thereby enlarging the space of synthesized output and overcoming the limitation imposed by the small scale of the input expression model set. (3) A local space transform procedure is included so that the output expressions fit distinct facial shapes (fat or thin), despite the scarcity of such shape variation in the input model set.
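The RBF-based scattered-data interpolation step can be sketched as follows: the displacements between reference landmarks and detected target landmarks drive a smooth deformation that is then applied to every vertex of the reference mesh. This is a minimal illustration, not the authors' exact formulation; the linear kernel and the function names are assumptions.

```python
import numpy as np

def rbf_warp(src_pts, dst_pts, kernel=lambda r: r):
    """Fit an RBF deformation that maps source landmarks onto target landmarks.

    src_pts, dst_pts: (N, D) corresponding landmark positions.
    kernel: radial basis function phi(r); here a simple linear kernel (assumed).
    Returns a function that warps arbitrary (M, D) point sets, e.g. all
    vertices of the reference face mesh.
    """
    n = len(src_pts)
    # Pairwise distances between source landmarks -> kernel matrix
    d = np.linalg.norm(src_pts[:, None, :] - src_pts[None, :, :], axis=-1)
    K = kernel(d)
    # One weight vector per output coordinate; tiny ridge for stability
    w = np.linalg.solve(K + 1e-9 * np.eye(n), dst_pts - src_pts)

    def warp(pts):
        d = np.linalg.norm(pts[:, None, :] - src_pts[None, :, :], axis=-1)
        return pts + kernel(d) @ w

    return warp

# Toy usage: deform points so that the landmarks land on their targets
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst = src + np.array([[0.1, 0.0], [0.0, 0.1], [0.0, 0.0]])
warp = rbf_warp(src, dst)
print(np.allclose(warp(src), dst, atol=1e-6))  # landmarks map (almost) exactly
```

In practice the landmarks would be the ASM key points lifted onto the reference model, and `warp` would be evaluated at every vertex of the neutral reference mesh to produce the target-face geometry.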
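The localized linear expression model of contribution (2) can be pictured as blending example-expression displacements independently within each segmented region, so local and global expressions can be mixed freely. The region masks and per-region weights below are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def blend_expressions(neutral, deltas, weights, region_masks):
    """Per-region linear blend of example expression displacements.

    neutral:      (V, 3) neutral-face vertex positions.
    deltas:       (E, V, 3) displacements of E example expressions vs. neutral.
    weights:      (R, E) blend weights, one row per region (assumed form).
    region_masks: (R, V) boolean masks assigning vertices to regions.
    """
    out = neutral.copy()
    for r, mask in enumerate(region_masks):
        for e in range(len(deltas)):
            # Each region mixes the example displacements with its own weights
            out[mask] += weights[r, e] * deltas[e][mask]
    return out

# Toy usage: 4 vertices, 2 regions, 1 example expression
neutral = np.zeros((4, 3))
deltas = np.ones((1, 4, 3))
masks = np.array([[True, True, False, False],
                  [False, False, True, True]])
weights = np.array([[1.0],   # region 0 fully applies the expression
                    [0.0]])  # region 1 stays neutral
blended = blend_expressions(neutral, deltas, weights, masks)
print(blended[:, 0])  # first two vertices displaced, last two unchanged
```

Because each region has its own weight vector, the output space is much larger than the span of the input expression set as a whole, which is the point of localizing the linear model.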