A robust face and ear based multimodal biometric system using sparse representation

  • Authors:
  • Zengxi Huang;Yiguang Liu;Chunguang Li;Menglong Yang;Liping Chen

  • Affiliations:
  • Vision and Image Processing Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China
  • Vision and Image Processing Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China
  • Department of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310058, PR China
  • Vision and Image Processing Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China and School of Aerospace Science and Engineering, Sichuan University, Chengdu 61006 ...
  • Vision and Image Processing Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China and College of Information and Engineering, Tarim University, Alaer 84330, PR Chin ...

  • Venue:
  • Pattern Recognition
  • Year:
  • 2013


Abstract

If fusion rules cannot adapt to changes in the environment and in individual users, multimodal systems may perform worse than unimodal systems when one or more modalities encounter data degeneration. This paper develops a robust face and ear based multimodal biometric system using Sparse Representation (SR), which integrates face and ear at the feature level and can effectively adjust the fusion rule according to the reliability difference between the modalities. We first propose a novel index, the Sparse Coding Error Ratio (SCER), to measure the reliability difference between the face and ear query samples. SCER is then used to develop an adaptive feature weighting scheme that dynamically reduces the negative effect of the less reliable modality. In the multimodal classification phase, SR-based classification techniques are employed, namely Sparse Representation based Classification (SRC) and Robust Sparse Coding (RSC). Finally, we derive a family of SR-based multimodal recognition methods, including Multimodal SRC with feature Weighting (MSRCW) and Multimodal RSC with feature Weighting (MRSCW). Experimental results demonstrate that: (a) MSRCW and MRSCW perform significantly better than unimodal recognition using either face or ear alone, as well as existing multimodal methods; (b) the effectiveness of adaptive feature weighting is verified: MSRCW and MRSCW are very robust to image degeneration affecting one of the modalities, and even when the face (ear) query sample suffers 100% random pixel corruption, they still achieve performance close to that of ear (face) unimodal recognition; (c) by integrating the advantages of adaptive feature weighting and sparsity-constrained regression, MRSCW is particularly effective at the face and ear based multimodal recognition problem.
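The pipeline the abstract describes — per-modality sparse coding, an SCER-style reliability cue, adaptive feature weighting, and an SRC decision on the fused features — can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the ISTA solver, the weighting function `w_f = e_e / (e_f + e_e)`, and all parameter values are assumptions introduced here for clarity.

```python
import numpy as np

def ista_l1(A, b, lam=0.05, iters=200):
    """Approximately minimise 0.5*||Ax - b||^2 + lam*||x||_1 via ISTA."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - b)) / L    # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

def coding_error(A, b, x):
    """Sparse coding (reconstruction) error of a query under its code."""
    return np.linalg.norm(b - A @ x)

def src_residuals(A, b, x, labels):
    """Class-wise reconstruction residuals (the SRC decision rule)."""
    res = {}
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        res[c] = np.linalg.norm(b - A @ np.where(mask, x, 0.0))
    return res

def msrcw_classify(A_face, A_ear, y_face, y_ear, labels, lam=0.05):
    """Toy MSRCW-style fusion: SCER-driven weighting + SRC on fused features.

    A_face, A_ear: training dictionaries (one column per sample, same label
    order given by `labels`); y_face, y_ear: query feature vectors.
    """
    # 1) Sparse coding error of each modality measures its reliability.
    e_f = coding_error(A_face, y_face, ista_l1(A_face, y_face, lam))
    e_e = coding_error(A_ear, y_ear, ista_l1(A_ear, y_ear, lam))
    scer = e_f / e_e                        # Sparse Coding Error Ratio
    # 2) Illustrative adaptive weights: down-weight the noisier modality.
    w_f, w_e = e_e / (e_f + e_e), e_f / (e_f + e_e)
    # 3) Feature-level fusion by weighted concatenation, then SRC.
    A = np.vstack([w_f * A_face, w_e * A_ear])
    y = np.concatenate([w_f * y_face, w_e * y_ear])
    res = src_residuals(A, y, ista_l1(A, y, lam), labels)
    return min(res, key=res.get), scer
```

On a query whose ear image is badly corrupted, `e_e` grows, SCER drops below 1, and the fusion automatically leans on the face features — the behaviour claim (b) of the abstract attributes to the adaptive weighting.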