Audio-Visual Speaker Detection Using Dynamic Bayesian Networks

  • Authors:
  • Ashutosh Garg; Vladimir Pavlovic; James M. Rehg


  • Venue:
  • FG '00: Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition
  • Year:
  • 2000

Abstract

The development of human-computer interfaces poses a challenging problem: the actions and intentions of different users have to be inferred from sequences of noisy and ambiguous sensory data. Temporal fusion of multiple sensors can be efficiently formulated using dynamic Bayesian networks (DBNs). The DBN framework combines the power of statistical inference and learning with contextual knowledge of the problem. We demonstrate the use of DBNs in tackling the problem of audio-visual speaker detection. "Off-the-shelf" visual and audio sensors (face, skin, texture, mouth motion, and silence detectors) are optimally fused along with contextual information in a DBN architecture that infers the instances when an individual is speaking. Results obtained in the setup of an actual human-machine interaction system (the Genie Casino Kiosk) demonstrate the superiority of our approach over that of a static, context-free fusion architecture.
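
To illustrate the kind of temporal fusion the abstract describes, here is a minimal sketch of a DBN for speaker detection: a two-state hidden variable ("speaking" / "not speaking") with several conditionally independent binary detectors, filtered forward over time. This is essentially an HMM with a factored (naive-Bayes) observation model; the structure, sensor set, and all parameter values are illustrative assumptions, not the authors' actual architecture or learned parameters.

```python
import numpy as np

# Hidden state: 0 = "not speaking", 1 = "speaking".
# transition[i, j] = P(state_t = j | state_{t-1} = i); values are assumed.
transition = np.array([[0.9, 0.1],
                       [0.2, 0.8]])
prior = np.array([0.5, 0.5])

# Per-sensor likelihoods P(detector fires | state), assuming the detectors
# (face, mouth motion, non-silence audio) are conditionally independent
# given the hidden state. All numbers are made up for illustration.
sensor_on = np.array([[0.30, 0.90],   # face detector
                      [0.10, 0.80],   # mouth-motion detector
                      [0.15, 0.85]])  # audio (non-silence) detector

def forward_filter(observations):
    """Return P(speaking) at each time step given binary sensor readings.

    observations: array of shape (T, 3), one 0/1 reading per sensor per frame.
    """
    belief = prior.copy()
    posteriors = []
    for obs in observations:
        # Predict: propagate the belief through the transition model.
        belief = transition.T @ belief
        # Update: multiply in each sensor's likelihood for its reading.
        for s, o in enumerate(obs):
            belief = belief * (sensor_on[s] if o else 1.0 - sensor_on[s])
        belief /= belief.sum()
        posteriors.append(belief[1])
    return np.array(posteriors)

# Example: face fires first, then mouth motion, then audio joins in.
frames = np.array([[1, 0, 0],
                   [1, 1, 0],
                   [1, 1, 1],
                   [1, 1, 1]])
print(forward_filter(frames))  # posterior P(speaking) per frame
```

The temporal coupling is what distinguishes this from the static, context-free fusion the paper compares against: a momentary dropout in one detector is smoothed by the belief carried over from previous frames rather than flipping the decision outright.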