Multimodal authentication using asynchronous HMMs

  • Authors: Samy Bengio
  • Affiliation: Dalle Molle Institute for Perceptual Artificial Intelligence (IDIAP), Martigny, Switzerland
  • Venue: AVBPA'03, Proceedings of the 4th International Conference on Audio- and Video-Based Biometric Person Authentication
  • Year: 2003


Abstract

It has often been shown that authenticating a person's identity with multiple modalities is more robust than relying on a single one. Various combination techniques exist, most of which operate on the output scores of the individual modality systems. In this paper, we present a novel HMM architecture that models the joint probability distribution of pairs of asynchronous sequences (such as speech and video streams) describing the same event. We show how this model can be used for audio-visual person authentication. Results on the M2VTS database show that the system performs robustly under various audio noise conditions when compared to other state-of-the-art techniques.
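To make the central idea concrete, here is a minimal sketch of a forward pass for an asynchronous-HMM-style model. This is an illustrative simplification, not the exact formulation of the paper: it assumes a faster stream x (length S) and a slower stream y (length T, with T ≤ S), and at each step a hidden state either emits a pair (x_s, y_t), advancing both streams, or emits x_s alone, advancing only the faster stream. All function and parameter names (`async_forward`, `log_eps`, etc.) are hypothetical.

```python
import numpy as np

def async_forward(log_init, log_trans, log_pair, log_solo, log_eps):
    """Joint log-likelihood log p(x, y) of two asynchronous streams.

    log_init[i]       : log p(initial state = i), taken before the first
                        transition (a modeling choice made for brevity)
    log_trans[j, i]   : log p(next state = i | current state = j)
    log_pair[i, s, t] : log p(x_s, y_t | state i)  (pair emission)
    log_solo[i, s]    : log p(x_s | state i)       (x-only emission)
    log_eps           : log prob. of emitting on both streams at a step
    """
    N, S, T = log_pair.shape
    log_one_minus_eps = np.log1p(-np.exp(log_eps))
    # alpha[i, s, t]: log prob of being in state i having emitted
    # x_1..x_s and y_1..y_t (a 3-D forward variable over the alignment).
    alpha = np.full((N, S + 1, T + 1), -np.inf)
    alpha[:, 0, 0] = log_init
    for s in range(1, S + 1):
        for t in range(min(s, T) + 1):  # y can never be ahead of x
            # Case 1: advance x only (state emits x_s alone).
            prev = alpha[:, s - 1, t]
            move = np.logaddexp.reduce(prev[:, None] + log_trans, axis=0)
            score = move + log_one_minus_eps + log_solo[:, s - 1]
            if t > 0:
                # Case 2: advance both streams (state emits (x_s, y_t)).
                prev2 = alpha[:, s - 1, t - 1]
                move2 = np.logaddexp.reduce(prev2[:, None] + log_trans,
                                            axis=0)
                score = np.logaddexp(
                    score, move2 + log_eps + log_pair[:, s - 1, t - 1])
            alpha[:, s, t] = score
    # Sum over final states once both streams are fully consumed.
    return np.logaddexp.reduce(alpha[:, S, T])
```

The dynamic program sums over all monotone alignments of y to x, so the cost is O(N² · S · T) rather than the O(N · S) of a standard HMM forward pass; for verification, the resulting log-likelihood of a claimed identity's client model can then be compared against a world model, as is common in score-based authentication.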