Hand-based verification and identification using palm-finger segmentation and fusion

  • Authors:
Gholamreza Amayeh; George Bebis; Ali Erol; Mircea Nicolescu

  • Affiliations:
Computer Vision Laboratory, University of Nevada, Reno 89557, United States (all authors)

  • Venue:
  • Computer Vision and Image Understanding
  • Year:
  • 2009

Abstract

Hand-based verification and identification represent a key biometric technology with a wide range of potential applications in both industry and government. Traditionally, hand-based verification and identification systems exploit information from the whole hand for authentication or recognition purposes and rely on guidance pegs to fix the position and orientation of the hand, compensating for hand and finger motion. In this paper, we propose a component-based approach to hand-based verification and identification that improves accuracy and robustness as well as ease of use, since it requires no pegs. Our approach accounts for hand and finger motion by decomposing the hand silhouette into regions corresponding to the back of the palm and the fingers. To improve accuracy and robustness, verification/recognition is performed by fusing information from the different parts of the hand. The proposed approach operates on 2D images acquired by placing the hand on a flat lighting table and requires neither guidance pegs nor the extraction of landmark points on the hand. To decompose the hand silhouette into regions, we have devised a robust methodology based on an iterative morphological filtering scheme. To capture the geometry of the back of the palm and the fingers, we employ region descriptors based on high-order Zernike moments, computed using an efficient methodology. The proposed approach has been evaluated for both verification and recognition on a database of 101 subjects with 10 images per subject, demonstrating high accuracy and robustness. Comparisons with related approaches that use the whole hand or different parts of the hand illustrate the superiority of the proposed approach, and qualitative and quantitative comparisons with state-of-the-art approaches indicate comparable or better accuracy.
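
The decomposition, descriptor, and fusion steps lend themselves to a short illustration. The sketch below is not the authors' implementation: it approximates the iterative morphological filtering scheme with a single opening by a disk (scipy.ndimage), computes Zernike moment magnitudes with the mahotas library rather than the paper's efficient high-order computation, and pairs query and enrolled fingers naively by label order. The structuring-element radius, moment degree, fusion weights, and acceptance threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch, assuming binary (boolean) hand silhouettes as input.
import numpy as np
from scipy import ndimage
import mahotas


def decompose_hand(silhouette, palm_radius=25):
    """Split a 2D boolean hand silhouette into a palm mask and finger masks.

    palm_radius (assumed value) must exceed half the finger width so the
    opening erases the fingers but keeps the palm.
    """
    y, x = np.ogrid[-palm_radius:palm_radius + 1, -palm_radius:palm_radius + 1]
    disk = x ** 2 + y ** 2 <= palm_radius ** 2
    # Opening with a disk wider than a finger removes the fingers,
    # leaving an estimate of the back of the palm.
    palm = ndimage.binary_opening(silhouette, structure=disk)
    # The residue is the union of the fingers; label connected components
    # to obtain one mask per finger.
    fingers_mask = silhouette & ~palm
    labels, n = ndimage.label(fingers_mask)
    fingers = [labels == i for i in range(1, n + 1)]
    return palm, fingers


def zernike_descriptor(region, degree=20):
    """Rotation-invariant Zernike moment magnitudes of one binary part."""
    ys, xs = np.nonzero(region)
    cy, cx = ys.mean(), xs.mean()
    # Radius of the smallest circle about the region centroid.
    radius = np.sqrt(((ys - cy) ** 2 + (xs - cx) ** 2).max())
    return mahotas.features.zernike_moments(region.astype(np.uint8),
                                            radius, degree=degree)


def fused_distance(query_silhouette, enrolled_silhouette, weights=None):
    """Weighted sum of per-part descriptor distances (palm + fingers).

    The paper matches corresponding parts; here fingers are paired by
    label order, which is only a rough approximation.
    """
    q_palm, q_fingers = decompose_hand(query_silhouette)
    e_palm, e_fingers = decompose_hand(enrolled_silhouette)
    q_desc = [zernike_descriptor(p) for p in [q_palm] + q_fingers]
    e_desc = [zernike_descriptor(p) for p in [e_palm] + e_fingers]
    n = min(len(q_desc), len(e_desc))            # guard against miscounts
    dists = np.array([np.linalg.norm(q - e)
                      for q, e in zip(q_desc[:n], e_desc[:n])])
    if weights is None:
        weights = np.ones(n) / n                 # equal weighting
    return float(np.dot(np.asarray(weights)[:n], dists))


# Verification: accept the claimed identity if the fused distance falls
# below a threshold learned on training data (value purely illustrative).
# accepted = fused_distance(query, enrolled) < 0.1
```

The single opening stands in for the paper's iterative filtering because it captures the same intuition: a disk wider than a finger but narrower than the palm erases the fingers while preserving the palm, so the silhouette minus the opened result isolates the fingers without any landmark detection.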