Context-aware fusion: A case study on fusion of gait and face for human identification in video

  • Authors:
  • Xin Geng; Kate Smith-Miles; Liang Wang; Ming Li; Qiang Wu

  • Affiliations:
  • School of Computer Science and Engineering, Southeast University, Nanjing 210096, China; School of Mathematical Sciences, Monash University, VIC 3800, Australia; National Key Lab for Novel S ...
  • School of Mathematical Sciences, Monash University, VIC 3800, Australia
  • Department of Computer Science, University of Bath, BA2 7AY, UK
  • School of Information Technology, Deakin University, VIC 3125, Australia
  • School of Computing and Communications, University of Technology, Sydney, NSW 2007, Australia

  • Venue:
  • Pattern Recognition
  • Year:
  • 2010

Abstract

Most work on multi-biometric fusion is based on static fusion rules. One prominent limitation of static fusion is that it cannot respond to changes in the environment or the individual users. This paper proposes context-aware multi-biometric fusion, which dynamically adapts the fusion rules to the real-time context. As a typical application, the context-aware fusion of gait and face for human identification in video is investigated. Two significant context factors that may affect the relationship between gait and face in the fusion are considered, namely view angle and subject-to-camera distance. Fusion methods adaptable to these two factors, based on either prior knowledge or machine learning, are proposed and tested. Experimental results show that the context-aware fusion methods perform significantly better than not only the individual biometric traits but also widely adopted static fusion rules, including SUM, PRODUCT, MIN, and MAX. Moreover, context-aware fusion based on machine learning outperforms that based on prior knowledge.
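
To make the contrast drawn in the abstract concrete, the Python sketch below compares static score-level fusion (SUM, PRODUCT, MIN, MAX over normalized gait and face match scores) with a context-aware rule that re-weights the two traits according to view angle and subject-to-camera distance. The weighting thresholds in context_weight are hypothetical placeholders in the spirit of the paper's prior-knowledge variant; they are not the actual rules or learned models reported in the paper, whose learning-based variant would instead fit this mapping from training data.

    # Static score-level fusion rules over normalized match scores in [0, 1].
    def fuse_static(gait_score, face_score, rule="SUM"):
        if rule == "SUM":
            return (gait_score + face_score) / 2.0
        if rule == "PRODUCT":
            return gait_score * face_score
        if rule == "MIN":
            return min(gait_score, face_score)
        if rule == "MAX":
            return max(gait_score, face_score)
        raise ValueError("unknown rule: " + rule)

    # Context-aware weighting: how much to trust gait versus face given the
    # current view angle (degrees) and subject-to-camera distance (metres).
    # The thresholds are illustrative "prior knowledge" only.
    def context_weight(view_angle_deg, distance_m):
        gait_w = 0.5
        if abs(view_angle_deg - 90.0) < 30.0:  # near-lateral view favours gait
            gait_w += 0.3
        if distance_m > 10.0:                  # face too small at long range
            gait_w += 0.2
        return min(max(gait_w, 0.0), 1.0)

    # Weighted-sum fusion whose weights follow the real-time context.
    def fuse_context_aware(gait_score, face_score, view_angle_deg, distance_m):
        w = context_weight(view_angle_deg, distance_m)
        return w * gait_score + (1.0 - w) * face_score

    if __name__ == "__main__":
        # Distant side view: the context-aware rule leans on the gait score.
        print(fuse_static(0.8, 0.3, rule="SUM"))                        # 0.55
        print(fuse_context_aware(0.8, 0.3, view_angle_deg=85.0,
                                 distance_m=15.0))                      # 0.8

In the example run, a strong gait score and a weak face score from a distant, near-lateral view yield a higher fused score under the context-aware rule than under static SUM, which is the kind of adaptive behaviour the paper evaluates.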