Joint sparsity-based robust multimodal biometrics recognition

  • Authors:
  • Sumit Shekhar (University of Maryland, College Park); Vishal M. Patel (University of Maryland, College Park); Nasser M. Nasrabadi (Army Research Lab, Adelphi); Rama Chellappa (University of Maryland, College Park)

  • Venue:
  • ECCV'12: Proceedings of the 12th European Conference on Computer Vision - Volume Part III
  • Year:
  • 2012

Abstract

Traditional biometric recognition systems rely on a single biometric signature for authentication. While the advantage of using multiple sources of information for establishing identity has been widely recognized, computational models for multimodal biometric recognition have only recently received attention. We propose a novel multimodal multivariate sparse representation method for multimodal biometric recognition, which represents the test data as a sparse linear combination of training data while constraining the observations from different modalities of the test subject to share their sparse representations. Thus, we simultaneously take into account correlations as well as coupling information between biometric modalities. Furthermore, the model is modified to make it robust to noise and occlusion. The resulting optimization problem is solved using an efficient alternating direction method. Experiments on a challenging public dataset show that our method compares favorably with competing fusion-based methods.
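The shared-support constraint in the abstract is typically enforced with an ℓ₁,₂ (row-sparsity) penalty on the coefficient matrix whose columns are the per-modality codes. As a rough illustration only, the sketch below solves a joint sparse coding problem of this form with a plain proximal-gradient loop rather than the paper's alternating direction solver; all dictionary names, sizes, and the regularization weight `lam` are hypothetical.

```python
import numpy as np

def prox_l12(G, t):
    """Row-wise group soft-thresholding: the proximal operator of
    t * sum_i ||G[i, :]||_2, which zeroes out entire rows (atoms)."""
    norms = np.linalg.norm(G, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))
    return G * scale

def joint_sparse_code(X_list, y_list, lam=0.1, n_iter=1000):
    """Minimize sum_d 0.5*||y_d - X_d g_d||^2 + lam*||G||_{1,2},
    where column d of G is the code g_d for modality d.
    X_list[d]: dictionary of training samples for modality d.
    y_list[d]: test observation for modality d."""
    D = len(X_list)
    n = X_list[0].shape[1]          # number of training atoms (shared)
    G = np.zeros((n, D))
    # Step size from the largest per-modality Lipschitz constant.
    L = max(np.linalg.norm(X, 2) ** 2 for X in X_list)
    for _ in range(n_iter):
        grad = np.column_stack([
            X_list[d].T @ (X_list[d] @ G[:, d] - y_list[d])
            for d in range(D)
        ])
        G = prox_l12(G - grad / L, lam / L)
    return G
```

Because the penalty acts on rows of `G`, a training atom is either used by all modalities or by none, which is exactly the coupling the abstract describes; classification then proceeds by comparing class-wise reconstruction residuals.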