Feature-Level Fusion in Personal Identification

  • Authors:
  • Yongsheng Gao; Michael Maggs

  • Affiliations:
  • Griffith University; Griffith University

  • Venue:
  • CVPR '05 Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) - Volume 1
  • Year:
  • 2005

Abstract

Existing studies of multi-modal and multi-view personal identification have focused on combining the outputs of multiple classifiers at the decision level. In this study, we investigate fusion at the feature level to combine multiple views and modalities in personal identification. A new similarity measure is proposed that integrates multiple 2-D view features representing the visual identity of a 3-D object seen from different viewpoints and captured by different sensors. Robustness to non-rigid distortions is achieved through a proximity-based correspondence scheme in the similarity computation. The feasibility and capability of the proposed technique for personal identification were evaluated on multi-view human faces and palmprints. This research demonstrates that feature-level fusion provides a new way to combine multiple modalities and views for personal identification.
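The general idea described in the abstract can be illustrated with a minimal sketch. The code below is not the authors' actual measure; it is an illustrative assumption of what feature-level fusion with proximity correspondence might look like: feature points from all views and modalities are pooled into a single representation before matching, and each feature is matched to its nearest counterpart within a small radius, so locally displaced (non-rigidly distorted) features can still correspond. The function names, the linear distance-decay score, and the `radius` parameter are all hypothetical choices for this sketch.

```python
import numpy as np

def proximity_similarity(feats_a, feats_b, radius=5.0):
    """Similarity between two sets of 2-D feature points.

    Each point in feats_a is matched to its nearest point in feats_b
    (proximity correspondence); matches farther than `radius` score
    zero, which tolerates small non-rigid drifts of feature locations.
    Note: this scoring rule is an assumption, not the paper's measure.
    """
    # Pairwise Euclidean distances, shape (len(feats_a), len(feats_b)).
    dists = np.linalg.norm(feats_a[:, None, :] - feats_b[None, :, :], axis=2)
    nearest = dists.min(axis=1)
    # Score decays linearly with distance inside the proximity radius.
    scores = np.where(nearest <= radius, 1.0 - nearest / radius, 0.0)
    return float(scores.mean())

def fused_similarity(views_a, views_b, radius=5.0):
    """Feature-level fusion: pool the feature sets from all views and
    modalities into one representation, then compute one similarity,
    rather than fusing per-view classifier decisions afterwards."""
    return proximity_similarity(np.vstack(views_a), np.vstack(views_b), radius)
```

This contrasts with decision-level fusion, where each view or modality would produce its own similarity score (or accept/reject decision) and only those outputs would be combined.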