Multi-Modal Face Tracking Using Bayesian Network

  • Authors:
  • Fang Liu, Xueyin Lin, Stan Z. Li, Yuanchun Shi

  • Venue:
  • AMFG '03 Proceedings of the IEEE International Workshop on Analysis and Modeling of Faces and Gestures
  • Year:
  • 2003

Abstract

This paper presents a Bayesian network based multi-modal fusion method for robust and real-time face tracking. The Bayesian network integrates a prior from second-order system dynamics and likelihood cues from color, edge, and face appearance. Since different modalities have different confidence scales, we encode the environmental factors related to the confidences of the modalities into the Bayesian network, and develop a Fisher discriminant analysis method for learning the optimal fusion. The face tracker can track multiple faces under different poses. It operates in two stages: first, hypotheses are efficiently generated using a coarse-to-fine strategy; then, multiple modalities are integrated in the Bayesian network to evaluate the posterior of each hypothesis. The hypothesis that maximizes the posterior (MAP) is selected as the estimate of the object state. Experimental results demonstrate the robustness and real-time performance of our face tracking approach.
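
To make the MAP selection step concrete, the following is a minimal sketch, not the authors' implementation. It assumes the cues are conditionally independent given the state, uses a Gaussian stand-in for the dynamics prior, and fixes the per-modality confidence weights w_m by hand (in the paper these confidences are encoded in the Bayesian network and learned via Fisher discriminant analysis). All function names, the toy cue functions, and the 2-D position state are hypothetical, introduced here only for illustration.

    import math
    import random

    # Hypothetical confidence weights w_m per modality; the paper learns these,
    # here they are fixed constants for the sketch.
    WEIGHTS = {"color": 1.0, "edge": 0.5, "appearance": 1.5}

    def log_prior(state, predicted_state, sigma=5.0):
        """Log of a Gaussian dynamics prior centered on the state predicted
        by a (hypothetical) second-order motion model."""
        dx = state[0] - predicted_state[0]
        dy = state[1] - predicted_state[1]
        return -(dx * dx + dy * dy) / (2.0 * sigma * sigma)

    def map_estimate(hypotheses, predicted_state, log_likelihoods):
        """Select the hypothesis maximizing the posterior, assuming the cues
        are conditionally independent given the state:
            log p(x | z) = log p(x) + sum_m w_m * log p(z_m | x) + const.
        `log_likelihoods` maps modality name -> function(state) -> log-likelihood."""
        best_state, best_score = None, -math.inf
        for state in hypotheses:
            score = log_prior(state, predicted_state)
            for modality, loglik in log_likelihoods.items():
                score += WEIGHTS[modality] * loglik(state)
            if score > best_score:
                best_state, best_score = state, score
        return best_state, best_score

    if __name__ == "__main__":
        # Toy usage: candidate states scattered around a predicted position,
        # scored with dummy cue functions standing in for the real likelihoods.
        predicted = (100.0, 80.0)
        hypotheses = [(100 + random.gauss(0, 3), 80 + random.gauss(0, 3))
                      for _ in range(50)]
        cues = {
            "color": lambda s: -abs(s[0] - 101.0),              # stand-in color cue
            "edge": lambda s: -abs(s[1] - 79.0),                # stand-in edge cue
            "appearance": lambda s: -0.1 * (s[0] - 100.0) ** 2, # stand-in appearance cue
        }
        state, score = map_estimate(hypotheses, predicted, cues)
        print("MAP state:", state, "score:", score)

Working in the log domain, as above, keeps the confidence weights additive and avoids numerical underflow when many cues are multiplied; the two-stage structure of the tracker corresponds to generating `hypotheses` coarse-to-fine and then scoring each one with this posterior.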