Frameworks for multimodal biometric using sparse coding

  • Authors:
  • Zengxi Huang; Yiguang Liu; Ronggang Huang; Menglong Yang

  • Affiliations:
  • Vision and Image Processing Laboratory, College of Computer Science, Sichuan University, Chengdu, P.R. China (all four authors)

  • Venue:
  • IScIDE '12: Proceedings of the Third Sino-Foreign-Interchange Conference on Intelligent Science and Intelligent Data Engineering
  • Year:
  • 2012

Abstract

In this paper, we introduce three frameworks for multimodal biometrics using sparse representation-based classification (SRC), which has recently been applied with success to many classification tasks. The first framework is multimodal SRC at the match score level (MSRC_s), in which the feature of each modality is sparsely coded independently, and the resulting representation fidelities (per-class reconstruction residuals) are used as match scores for multimodal classification. The other two frameworks are multimodal SRC at the feature level (MSRC_f1 and MSRC_f2), where the features of all modalities are first fused and then classified with SRC. They differ in that MSRC_f1 concatenates the features into a single multimodal feature vector, whereas MSRC_f2 combines the features implicitly within an iterative joint sparse coding process. As a typical application, the three frameworks are applied to the fusion of face and ear for human identification. In our experiments, features are extracted with Principal Component Analysis (PCA). The results demonstrate that the proposed multimodal methods significantly outperform multimodal recognition with common classifiers. Among the SRC-based methods, MSRC_s achieves the best recognition accuracy on almost all test items, which may be because it allows each modality to be sparsely coded independently.
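
To make the two simpler frameworks concrete, the sketch below implements MSRC_s and MSRC_f1 roughly as the abstract describes them. It is a minimal illustration, not the authors' code: scikit-learn's Lasso stands in for the ℓ1 sparse coding solver, the sum-rule fusion of normalized residuals in `msrc_s` is an assumed combination rule (the abstract does not specify one), and all function and variable names are hypothetical. MSRC_f2's iterative joint sparse coding is omitted.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_residuals(D, labels, y, alpha=0.01):
    """Sparse-code a probe y over the training dictionary D (columns are
    training samples) and return the per-class reconstruction residuals,
    i.e. the "representation fidelities" that SRC classifies with."""
    # l1-regularized least squares as a practical stand-in for the
    # l1-minimization step of SRC.
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    coder.fit(D, y)
    x = coder.coef_
    return {c: np.linalg.norm(y - D @ np.where(labels == c, x, 0.0))
            for c in np.unique(labels)}

def msrc_s(dicts, labels, probes, alpha=0.01):
    """MSRC_s: code each modality independently, turn its residuals into
    normalized match scores, and fuse them (sum rule assumed here)."""
    fused = {c: 0.0 for c in np.unique(labels)}
    for D, y in zip(dicts, probes):
        r = src_residuals(D, labels, y, alpha)
        total = sum(r.values())
        for c, rc in r.items():
            fused[c] += rc / total
    return min(fused, key=fused.get)  # smallest fused residual wins

def msrc_f1(dicts, labels, probes, alpha=0.01):
    """MSRC_f1: stack the per-modality features into one multimodal
    vector and run plain SRC on the concatenated dictionary."""
    r = src_residuals(np.vstack(dicts), labels, np.concatenate(probes), alpha)
    return min(r, key=r.get)

# Toy usage: random data standing in for PCA features of face and ear.
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(5), 4)              # 5 subjects, 4 samples each
D_face = rng.standard_normal((30, labels.size))  # 30-dim "face" features
D_ear = rng.standard_normal((25, labels.size))   # 25-dim "ear" features
probe = [D_face[:, 2] + 0.05 * rng.standard_normal(30),  # noisy copies of a
         D_ear[:, 2] + 0.05 * rng.standard_normal(25)]   # subject-0 sample
print(msrc_s([D_face, D_ear], labels, probe))    # expected: 0
print(msrc_f1([D_face, D_ear], labels, probe))   # expected: 0
```

In a real setting the dictionary columns would be ℓ2-normalized PCA projections of the training images, as SRC conventionally assumes; the random data above only exercises the code path.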