Modeling and rendering architecture from photographs: a hybrid geometry- and image-based approach
SIGGRAPH '96 Proceedings of the 23rd annual conference on Computer graphics and interactive techniques
Probabilistic Visual Learning for Object Representation
IEEE Transactions on Pattern Analysis and Machine Intelligence
Linear Object Classes and Image Synthesis From a Single Example Image
IEEE Transactions on Pattern Analysis and Machine Intelligence
EigenTracking: Robust Matching and Tracking of Articulated Objects Using a View-Based Representation
International Journal of Computer Vision
A morphable model for the synthesis of 3D faces
Proceedings of the 26th annual conference on Computer graphics and interactive techniques
Robust recognition using eigenimages
Computer Vision and Image Understanding - Special issue on robust statistical techniques in image understanding
IEEE Transactions on Pattern Analysis and Machine Intelligence
Reconstruction of Partially Damaged Face Images Based on a Morphable Face Model
IEEE Transactions on Pattern Analysis and Machine Intelligence
Face Recognition Based on Fitting a 3D Morphable Model
IEEE Transactions on Pattern Analysis and Machine Intelligence
The CMU Pose, Illumination, and Expression Database
IEEE Transactions on Pattern Analysis and Machine Intelligence
Appearance-Based Face Recognition and Light-Fields
IEEE Transactions on Pattern Analysis and Machine Intelligence
Face Recognition Based on Frontal Views Generated from Non-Frontal Images
CVPR '05 Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) - Volume 2
Reconstruction, registration, and modeling of deformable object shapes
Regression based automatic face annotation for deformable model building
Pattern Recognition
Pose variations, especially large out-of-plane rotations, make face recognition a difficult problem. In this paper, we propose an algorithm that uses a single input image to accurately synthesize an image of the same person in a different pose. We represent the two poses by stacking their information (pixels or feature locations) into a combined feature space. A given test vector then consists of a known part, corresponding to the input image, and a missing part, corresponding to the image to be synthesized. We solve for the missing part by maximizing the test vector’s probability. This approach combines the “distance-from-feature-space” and “distance-in-feature-space” measures, maximizing the test vector’s probability by minimizing a weighted sum of these two distances. Our approach requires neither 3D training data nor a 3D model, and does not require correspondence between different poses. The algorithm is computationally efficient, taking only 4–5 seconds to generate a face. Experimental results show that our approach produces more accurate results than the commonly used linear-object-class approach. Such a technique can help face recognition systems overcome the pose variation problem.
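A minimal sketch of the completion idea described in the abstract, on synthetic data: train PCA on stacked two-pose vectors, then infer the missing (target-pose) half of a test vector by minimizing a weighted sum of the reconstruction residual (distance-from-feature-space) and the in-subspace Mahalanobis distance (distance-in-feature-space), which reduces to ridge regression on the PCA coefficients. All names, dimensions, and the specific ridge reduction are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "training" data: n stacked vectors, each the concatenation of
# an input-pose part (d_k dims) and a target-pose part (d_m dims), drawn
# from a low-dimensional latent model plus small noise.
n, d_k, d_m = 200, 30, 30
latent = rng.standard_normal((n, 5))
A = rng.standard_normal((5, d_k + d_m))
X = latent @ A + 0.01 * rng.standard_normal((n, d_k + d_m))

# PCA on the stacked vectors.
mu = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
r = 5                                   # subspace dimension (assumed)
W = Vt[:r].T                            # (d_k + d_m, r) principal directions
lam = (s[:r] ** 2) / n                  # per-direction variances
rho = ((s[r:] ** 2).sum() / n) / max(d_k + d_m - r, 1)  # residual variance

# Split basis and mean into known (input-pose) and missing (target-pose) rows.
W_k, W_m = W[:d_k], W[d_k:]
mu_k, mu_m = mu[:d_k], mu[d_k:]

def synthesize(x_known):
    """Infer the missing half of a stacked vector.

    Minimizes ||x_k - (mu_k + W_k c)||^2 / rho + sum_i c_i^2 / lam_i,
    i.e. a weighted sum of the out-of-subspace residual and the
    in-subspace Mahalanobis distance; the minimizer is a ridge solution.
    """
    b = W_k.T @ (x_known - mu_k) / rho
    M = W_k.T @ W_k / rho + np.diag(1.0 / lam)
    c = np.linalg.solve(M, b)
    return mu_m + W_m @ c

# Usage: hold out one noiseless stacked vector, keep its known half,
# and synthesize the missing half.
x = latent[0] @ A
pred = synthesize(x[:d_k])
err = np.linalg.norm(pred - x[d_k:]) / np.linalg.norm(x[d_k:])
```

Because the known half alone over-determines the few subspace coefficients, the inferred target-pose half matches the held-out ground truth closely on this synthetic example; the prior term (weighted by the eigenvalues) keeps the solution stable when the known part is noisy or low-dimensional.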