Multimodal Speaker Detection Using Input/Output Dynamic Bayesian Networks
ICMI '00: Proceedings of the Third International Conference on Advances in Multimodal Interfaces
The development of human-computer interfaces poses a challenging problem: the actions and intentions of different users have to be inferred from sequences of noisy and ambiguous sensory data. Temporal fusion of multiple sensors can be efficiently formulated using dynamic Bayesian networks (DBNs). The DBN framework allows the power of statistical inference and learning to be combined with contextual knowledge of the problem. We demonstrate the use of DBNs in tackling the problem of audio/visual speaker detection. "Off-the-shelf" visual and audio sensors (face, skin, texture, mouth motion, and silence detectors) are optimally fused, along with contextual information, in a DBN architecture that infers when an individual is speaking. Results obtained with an actual human-machine interaction system (the Genie Casino Kiosk) demonstrate the superiority of our approach over a static, context-free fusion architecture.
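As a concrete illustration of the temporal fusion the abstract describes, the sketch below implements the simplest possible DBN, a two-state hidden Markov model over speaking / not speaking, that forward-filters the binary outputs of the five detectors into a per-frame probability of speech. The network structure, the naive-Bayes sensor independence assumption, and every probability here are illustrative placeholders, not the parameters of the paper's actual model.

```python
"""Illustrative DBN-style temporal sensor fusion for speaker detection.

A minimal sketch: the hidden state is binary (not speaking / speaking),
the five "off-the-shelf" detectors are treated as conditionally
independent binary observations, and inference is plain forward
filtering. All probabilities are assumed values for illustration only.
"""

import numpy as np

STATES = ("not_speaking", "speaking")

# P(S_t | S_{t-1}): speakers tend to stay in the same state (assumed).
TRANSITION = np.array([[0.9, 0.1],
                       [0.2, 0.8]])

# P(detector fires | state) as (not_speaking, speaking) pairs (assumed).
SENSOR_LIKELIHOOD = {
    "face":          (0.30, 0.90),
    "skin":          (0.40, 0.85),
    "texture":       (0.35, 0.80),
    "mouth_motion":  (0.10, 0.75),
    "audio_silence": (0.80, 0.15),
}


def observation_likelihood(obs):
    """P(obs | state) under a naive-Bayes sensor model."""
    lik = np.ones(2)
    for name, fired in obs.items():
        p_fire = np.array(SENSOR_LIKELIHOOD[name])
        lik *= p_fire if fired else 1.0 - p_fire
    return lik


def filter_speaking(observations, prior=(0.5, 0.5)):
    """Forward filtering: P(speaking | obs_1..t) for each frame."""
    belief = np.asarray(prior, dtype=float)
    trace = []
    for obs in observations:
        belief = TRANSITION.T @ belief            # time update
        belief *= observation_likelihood(obs)     # measurement update
        belief /= belief.sum()                    # normalize
        trace.append(belief[1])
    return trace


if __name__ == "__main__":
    frames = [
        {"face": 1, "skin": 1, "texture": 1, "mouth_motion": 0, "audio_silence": 1},
        {"face": 1, "skin": 1, "texture": 1, "mouth_motion": 1, "audio_silence": 0},
        {"face": 1, "skin": 1, "texture": 1, "mouth_motion": 1, "audio_silence": 0},
    ]
    for t, p in enumerate(filter_speaking(frames)):
        print(f"frame {t}: P(speaking) = {p:.3f}")
```

The temporal smoothing supplied by the transition model is what separates this from the static, context-free fusion the abstract compares against: a single noisy frame (e.g., a momentary silence) shifts the belief gradually rather than flipping the decision outright.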