In this paper, we introduce three frameworks for multimodal biometrics based on sparse representation based classification (SRC), which has recently been applied successfully to many classification tasks. The first framework is multimodal SRC at the match score level (MSRC_s), in which the features of each modality are sparsely coded independently, and their representation fidelities are then used as match scores for multimodal classification. The other two frameworks perform multimodal SRC at the feature level (MSRC_f1, MSRC_f2), where the features of all modalities are first fused and then classified by SRC. The difference between them is that MSRC_f1 concatenates the features into a single multimodal feature vector, while MSRC_f2 implicitly combines the features in an iterative joint sparse coding process. As a typical application, the fusion of face and ear cues for human identification is investigated using the three frameworks. In our experiments, Principal Component Analysis (PCA) based feature extraction is applied. Extensive experimental results demonstrate that the proposed multimodal methods significantly outperform multimodal recognition with common classifiers. Among the SRC based methods, MSRC_s achieves the top recognition accuracy on almost all test sets, which may be attributed to allowing each modality its own independent sparse code.
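The match-score-level scheme described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes an l1 solver (here scikit-learn's `Lasso` as a stand-in for the SRC l1-minimization step), training samples stacked as dictionary columns, and simple sum-rule fusion of the per-class reconstruction residuals; the function names `src_residuals` and `msrc_s` are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Lasso  # stand-in l1 solver for the SRC coding step

def src_residuals(D, labels, y, alpha=0.01):
    """Sparse-code test sample y over dictionary D (features x train samples),
    then return per-class reconstruction residuals ||y - D_c x_c||_2."""
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    lasso.fit(D, y)              # solves min ||y - D x||^2 + alpha * ||x||_1
    x = lasso.coef_
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c       # keep only coefficients of class c
        residuals[c] = np.linalg.norm(y - D[:, mask] @ x[mask])
    return residuals

def msrc_s(modal_dicts, labels, modal_samples):
    """Match-score-level fusion (MSRC_s sketch): code each modality
    independently, sum per-class residuals as match scores, pick the minimum."""
    total = {c: 0.0 for c in np.unique(labels)}
    for D, y in zip(modal_dicts, modal_samples):
        for c, r in src_residuals(D, labels, y).items():
            total[c] += r
    return min(total, key=total.get)
```

A feature-level variant in the spirit of MSRC_f1 would instead concatenate the (e.g., PCA-reduced) feature vectors of all modalities before a single call to `src_residuals`.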