Most existing multimodal 2D + 3D face recognition approaches do not account for the dependency between the 2D and 3D representations of a face. This dependency limits the benefit of fusing the modalities at the late feature or metric level, so fusing at an early stage is advantageous. We propose an image-level fusion method that exploits the dependency between the modalities for face recognition. Facial cues from the 2D and 3D images are fused into a more independent and discriminating representation by finding fusion axes that pass through the most uncorrelated information in the images. Experiments on our face database of 1280 2D + 3D facial samples from 80 adults show that the proposed image-level fusion outperforms pixel- and metric-level fusion approaches.
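The idea of finding fusion axes that pass through the most uncorrelated information can be sketched as a decorrelating projection of the concatenated modalities. The sketch below is an illustrative assumption, not the paper's exact algorithm: it centers each modality, concatenates the grey-level and depth vectors at the image level, and takes the leading eigenvectors of the joint covariance as fusion axes, so the fused coordinates are mutually uncorrelated. All data, dimensions, and variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: n samples, each a flattened grey-level image (2D cue)
# and a flattened depth map (3D cue); dimensions are illustrative only.
n, d = 100, 64
X2d = rng.standard_normal((n, d))   # grey-level features
X3d = rng.standard_normal((n, d))   # depth features

# Image-level fusion sketch: center each modality, concatenate them,
# and diagonalize the joint covariance of the concatenated vectors.
Z = np.hstack([X2d - X2d.mean(axis=0), X3d - X3d.mean(axis=0)])
C = Z.T @ Z / (n - 1)
eigvals, eigvecs = np.linalg.eigh(C)

# The eigenvectors are the "fusion axes": projections onto them are
# mutually uncorrelated. Keep the k highest-variance axes.
k = 10
order = np.argsort(eigvals)[::-1][:k]
axes = eigvecs[:, order]
fused = Z @ axes                    # n x k fused, decorrelated features

print(fused.shape)
```

The fused features could then feed any classifier; the point of the sketch is only that early, image-level fusion lets the projection see the cross-modality covariance that late (metric-level) fusion never observes.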