Facial Action Transfer (FAT) has recently attracted much attention in computer vision due to its diverse applications in the movie industry, computer games, and privacy protection. The goal of FAT is to "clone" the facial actions from videos of one person (the source) to another person (the target). In this paper, we assume that we have a video of the source person but only a single frontal image of the target person. Most successful methods for FAT require a training set with annotated correspondences between expressions of different subjects, sometimes including many images of the target subject. However, labeling expressions is time consuming and error prone (e.g., it is difficult to capture the same intensity of an expression across people). Moreover, in many applications it is not realistic to have many labeled images of the target. This paper proposes a method to learn a personalized facial model that can produce photo-realistic, person-specific facial actions (e.g., synthesizing wrinkles for smiling) from only a neutral image of the target person. More importantly, our learning method does not need an explicit correspondence of expressions across subjects. Experiments on the Cohn-Kanade and RU-FACS databases show the effectiveness of our approach in generating video-realistic images of the target person driven by spontaneous facial actions of the source. Moreover, we illustrate applications of FAT to face de-identification.
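To make the transfer problem concrete, the following is a minimal geometry-only baseline, not the authors' personalized model: it simply copies the source's landmark displacement (expression minus neutral) onto the target's neutral landmarks, with a crude per-axis scale to compensate for different face sizes. All function and variable names here are illustrative assumptions; a texture model would still be needed to produce photo-realistic output such as wrinkles.

```python
import numpy as np

def transfer_expression(src_neutral, src_expr, tgt_neutral):
    """Naive landmark-transfer baseline (illustrative, not the paper's method).

    src_neutral, src_expr, tgt_neutral: (N, 2) arrays of facial landmarks.
    Returns the target landmarks displaced by the source's facial action.
    """
    # The "facial action" as a per-landmark displacement on the source face.
    displacement = src_expr - src_neutral
    # Crude per-axis scale: ratio of target to source face extents.
    scale = np.ptp(tgt_neutral, axis=0) / np.ptp(src_neutral, axis=0)
    return tgt_neutral + displacement * scale
```

Such a purely linear transfer ignores person-specific appearance changes, which is precisely the gap the paper's learned personalized model addresses.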