Fusing multimodal biometrics with quality estimates via a Bayesian belief network

  • Authors:
  • Donald E. Maurer; John P. Baker

  • Affiliations:
  • The Johns Hopkins University Applied Physics Laboratory, Laurel, MD 20723-6099, USA; The Johns Hopkins University Applied Physics Laboratory, Laurel, MD 20723-6099, USA

  • Venue:
  • Pattern Recognition
  • Year:
  • 2008


Abstract

Biometric systems for today's high-security applications must meet stringent performance requirements; fusing multiple biometrics can help lower system error rates. Fusion methods include processing biometric modalities sequentially until an acceptable match is obtained, using logical (AND/OR) operations, or summing similarity scores. More sophisticated methods combine scores from separate classifiers for each modality. This paper develops a novel fusion architecture based on Bayesian belief networks. Although Bayesian update methods have been used before, our approach more fully exploits the graphical structure of Bayes nets to define and explicitly model statistical dependencies between relevant variables: per-sample measurements, such as match scores and corresponding quality estimates, and global decision variables. These statistical dependencies take the form of conditional distributions, which we model as Gaussian, gamma, log-normal, or beta, each determined by its mean and variance, thus significantly reducing training data requirements. Moreover, by conditioning decision variables on quality as well as match score, we can extract information from lower-quality measurements rather than rejecting them out of hand. Another feature of our method is a global quality measure designed to be used as a confidence estimate supporting decision making. Preliminary studies using the architecture to fuse fingerprints and voice are reported.
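The sketch below is only an illustration of the general idea described in the abstract: conditioning class likelihoods on both a match score and a quality estimate, then combining modalities through Bayes' rule. It is not the authors' architecture, which uses a full Bayesian belief network with an explicit dependency graph and gamma, log-normal, and beta conditionals; here the conditionals are simplified to Gaussians, the fusion is naive-Bayes style, and all model parameters, quality bands, and names (MODELS, fuse) are hypothetical.

```python
import math

def gaussian_pdf(x, mean, var):
    """Gaussian density; each conditional is fixed by its mean and variance."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Hypothetical per-modality conditionals P(score | class, quality band).
# Parameters are illustrative only; they are not taken from the paper.
MODELS = {
    "fingerprint": {
        ("genuine", "high"):  (0.80, 0.01),
        ("genuine", "low"):   (0.65, 0.03),
        ("impostor", "high"): (0.30, 0.02),
        ("impostor", "low"):  (0.40, 0.04),
    },
    "voice": {
        ("genuine", "high"):  (0.75, 0.02),
        ("genuine", "low"):   (0.60, 0.04),
        ("impostor", "high"): (0.35, 0.02),
        ("impostor", "low"):  (0.45, 0.05),
    },
}

def fuse(observations, prior_genuine=0.5):
    """Fuse (modality, score, quality) tuples into P(genuine | evidence).

    Likelihoods are conditioned on both the class hypothesis and the quality
    band, so low-quality samples are down-weighted through broader, more
    overlapping conditionals rather than being discarded outright.
    """
    log_genuine = math.log(prior_genuine)
    log_impostor = math.log(1.0 - prior_genuine)
    for modality, score, quality in observations:
        mean_g, var_g = MODELS[modality][("genuine", quality)]
        mean_i, var_i = MODELS[modality][("impostor", quality)]
        log_genuine += math.log(gaussian_pdf(score, mean_g, var_g))
        log_impostor += math.log(gaussian_pdf(score, mean_i, var_i))
    # Normalize in log space to a posterior probability of the genuine class.
    m = max(log_genuine, log_impostor)
    pg = math.exp(log_genuine - m)
    pi = math.exp(log_impostor - m)
    return pg / (pg + pi)

if __name__ == "__main__":
    obs = [("fingerprint", 0.72, "high"), ("voice", 0.58, "low")]
    print(f"P(genuine | scores, qualities) = {fuse(obs):.3f}")
```

In this toy setup the low-quality voice sample still contributes evidence, but its wider class-conditional variances make it less decisive than the high-quality fingerprint score, mirroring the abstract's point about exploiting rather than rejecting lower-quality measurements.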