Partial Face Matching between Near Infrared and Visual Images in MBGC Portal Challenge

  • Authors:
  • Dong Yi, Shengcai Liao, Zhen Lei, Jitao Sang, Stan Z. Li

  • Affiliations:
  • Center for Biometrics and Security Research, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China (all authors)

  • Venue:
  • ICB '09: Proceedings of the Third International Conference on Advances in Biometrics
  • Year:
  • 2009

Abstract

The latest Multiple Biometric Grand Challenge (MBGC 2008) sets up a new experiment in which near infrared (NIR) face videos containing partial faces are used as the probe set and visual (VIS) images of full faces are used as the target set. This is challenging for two reasons: (1) it has to deal with partially occluded faces in the NIR videos, and (2) the matching is between heterogeneous NIR and VIS faces. Partial face matching is also a problem often confronted in many video-based face biometric applications. In this paper, we propose a novel approach for solving this challenging problem. For partial face matching, we propose a local patch-based method to deal with partial face data. For heterogeneous face matching, we propose the philosophy of enhancing the features common to the heterogeneous images while reducing their differences. This is realized with edge-enhancing filters, which are at the same time also beneficial for partial face matching. The approach requires neither learning procedures nor training data. Experiments are performed on the MBGC portal challenge data, comparing the proposed approach with several well-known state-of-the-art methods. Extensive results show that, without knowing the statistical characteristics of the subjects or data, the proposed approach significantly outperforms the compared methods, achieving ten-fold higher verification rates at a FAR of 0.1%.
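
The abstract does not specify the exact filter or patch-matching scheme, so the following is only a minimal Python sketch of the general idea: a Difference-of-Gaussians filter stands in for the edge-enhancing step, and co-located patches are compared by normalized cross-correlation with a trimmed average so that occluded patches of a partial face do not dominate the score. The names edge_enhance, patch_match_score and the parameters patch, stride, keep are illustrative assumptions, not identifiers from the paper.

import numpy as np
from scipy import ndimage

def edge_enhance(img, sigma=1.0):
    # Difference-of-Gaussians as a stand-in edge-enhancing filter
    # (an assumption; the paper only says "edge-enhancing filters").
    img = img.astype(np.float64)
    return ndimage.gaussian_filter(img, sigma) - ndimage.gaussian_filter(img, 2.0 * sigma)

def ncc(a, b, eps=1e-8):
    # Normalized cross-correlation between two equally sized patches.
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def patch_match_score(probe, target, patch=16, stride=8, keep=0.5):
    # Score each probe patch against the co-located target patch after
    # edge enhancement, then average only the best `keep` fraction so
    # missing or occluded patches in a partial face are discounted.
    p, t = edge_enhance(probe), edge_enhance(target)
    h, w = p.shape
    scores = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            scores.append(ncc(p[y:y + patch, x:x + patch],
                              t[y:y + patch, x:x + patch]))
    scores.sort(reverse=True)
    top = scores[:max(1, int(len(scores) * keep))]
    return sum(top) / len(top)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vis = rng.random((128, 128))               # toy VIS target image
    nir = vis + 0.1 * rng.random((128, 128))   # toy heterogeneous NIR probe
    nir[64:, :] = 0.0                          # simulate a partial (occluded) face
    print(patch_match_score(nir, vis))

Because the trimmed average keeps only the best-matching patches, the score degrades gracefully as more of the probe face is occluded, which matches the motivation given in the abstract; a full implementation would follow the paper's actual filter and patch design.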