A sparse representation method of bimodal biometrics and palmprint recognition experiments

  • Authors:
  • Yong Xu; Zizhu Fan; Minna Qiu; David Zhang; Jing-Yu Yang

  • Affiliations:
  • Bio-Computing Research Center, Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen, China and Key Laboratory of Network Oriented Intelligent Computation, Shenzhen, China
  • Bio-Computing Research Center, Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen, China and School of Basic Science, East China Jiaotong University, Nanchang, Jiangxi, China
  • Bio-Computing Research Center, Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen, China
  • Biometrics Research Centre, Department of Computing, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong
  • School of Computer Science & Technology, Nanjing University of Science & Technology, Nanjing, China

  • Venue:
  • Neurocomputing
  • Year:
  • 2013

Abstract

In this paper, we propose a sparse representation method for bimodal biometrics. The proposed method first performs feature-level fusion by concatenating the samples of the two biometric traits into a single real vector. The method then assumes that an approximate representation of the test sample may be more useful for classification than the test sample itself, and it classifies the test sample on the basis of this approximation. Specifically, the method produces the approximate representation as a weighted sum of the test sample's neighbors in the set of training samples, and performs classification based on this representation. A variety of experiments demonstrate that the proposed approximate representation achieves a higher classification accuracy. The method rests on a reasonable assumption: the test sample probably belongs to one of the classes that its neighbors belong to. In this paper, we also formally analyze the difference between the proposed method and conventional appearance-based methods, and demonstrate that the proposed method represents the test sample more accurately than those methods.
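The pipeline the abstract describes (feature-level fusion by concatenation, then an approximate representation built from a weighted sum of the test sample's nearest training samples, then classification from that representation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the choice of Euclidean distance, least-squares weighting over the k neighbors, and the class-wise residual decision rule are all assumptions filled in for the sketch, and the function names (`fuse`, `classify`) are hypothetical.

```python
import numpy as np

def fuse(sample_a, sample_b):
    """Feature-level fusion: concatenate the two biometric-trait
    samples into one real vector, as described in the abstract."""
    return np.concatenate([sample_a.ravel(), sample_b.ravel()])

def classify(test, train, labels, k=5):
    """Approximate the test sample by a weighted sum of its k nearest
    training samples, then assign it to the class whose neighbors'
    weighted contribution reconstructs it with the smallest residual.
    The least-squares weighting and residual-based decision rule are
    illustrative assumptions, not details taken from the paper."""
    # Find the k nearest training samples by Euclidean distance.
    dists = np.linalg.norm(train - test, axis=1)
    idx = np.argsort(dists)[:k]
    neighbors = train[idx]                      # shape (k, d)
    # Weights w minimizing ||test - w @ neighbors||_2 (least squares).
    w, *_ = np.linalg.lstsq(neighbors.T, test, rcond=None)
    best_class, best_err = None, np.inf
    for c in np.unique(labels[idx]):
        mask = labels[idx] == c
        # Contribution of class c's neighbors to the approximation.
        contrib = w[mask] @ neighbors[mask]
        err = np.linalg.norm(test - contrib)
        if err < best_err:
            best_class, best_err = c, err
    return best_class
```

For a bimodal test sample, one would call `fuse` on the two trait samples (and likewise on every training pair) before passing the fused vectors to `classify`, so that fusion happens before the neighbor search, consistent with the abstract's ordering.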