Fusion of Multiple Camera Views for Kernel-Based 3D Tracking

  • Authors:
  • Ambrish Tyagi (Ohio State University)
  • Gerasimos Potamianos (IBM T.J. Watson Research Center, Yorktown Heights, NY)
  • James W. Davis (Ohio State University)
  • Stephen M. Chu (IBM T.J. Watson Research Center, Yorktown Heights, NY)

  • Venue:
  • WMVC '07: Proceedings of the IEEE Workshop on Motion and Video Computing
  • Year:
  • 2007

Abstract

We present a computer vision system that robustly tracks an object in 3D by combining evidence from multiple calibrated cameras. Its novelty lies in a unified approach to 3D kernel-based tracking, which fuses the appearance features from all available camera sensors, as opposed to tracking the object's appearance in the individual 2D views and fusing the results. The elegance of the method resides in its inherent ability to handle problems encountered by various 2D trackers, including scale selection, occlusion, view dependence, and correspondence across different views. We apply the method to the CHIL project database, tracking the presenter's head during lectures inside smart rooms equipped with four calibrated cameras. Compared to traditional 2D mean-shift tracking approaches, the proposed algorithm yields a 35% relative reduction in overall 3D tracking error and a 70% reduction in the number of tracker re-initializations.
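The Python sketch below illustrates the core idea as the abstract describes it: a single kernel defined in 3D, with color evidence pooled across all calibrated views before each mean-shift update, rather than one tracker per 2D view. It is not the authors' implementation; the helper names (project, color_bin, fused_histogram, mean_shift_3d), the RGB histogram, the Epanechnikov kernel, and the toy two-camera scene are all illustrative assumptions.

```python
# Minimal sketch of 3D kernel-based tracking with multi-view appearance
# fusion, under assumed details (RGB histogram, Epanechnikov kernel).
import numpy as np

N_BINS = 16  # per-channel quantization of the assumed RGB color histogram


def project(P, X):
    """Project a 3D point X with a 3x4 camera matrix P to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]


def color_bin(pixel):
    """Map an RGB pixel (0-255 per channel) to a joint histogram bin index."""
    r, g, b = (pixel // (256 // N_BINS)).astype(int)
    return (r * N_BINS + g) * N_BINS + b


def fused_histogram(center, radius, cameras, images, n_samples=400, rng=None):
    """Kernel-weighted color histogram of the 3D ball around `center`,
    accumulated over all camera views -- the feature-fusion step."""
    if rng is None:
        rng = np.random.default_rng(0)
    hist = np.zeros(N_BINS ** 3)
    samples = center + radius * rng.uniform(-1, 1, size=(n_samples, 3))
    for X in samples:
        d2 = np.sum(((X - center) / radius) ** 2)
        if d2 >= 1.0:
            continue  # outside the 3D kernel support
        k = 1.0 - d2  # Epanechnikov profile defined in 3D, not per view
        for P, img in zip(cameras, images):
            u, v = np.round(project(P, X)).astype(int)
            if 0 <= v < img.shape[0] and 0 <= u < img.shape[1]:
                hist[color_bin(img[v, u])] += k
    s = hist.sum()
    return hist / s if s > 0 else hist


def mean_shift_3d(center, radius, cameras, images, target_hist,
                  n_iter=10, n_samples=400, rng=None):
    """Iterate mean shift directly in 3D: each 3D sample is weighted by the
    sqrt(target/candidate) histogram ratio pooled over all views. With an
    Epanechnikov kernel the update is just the weighted mean of the samples."""
    if rng is None:
        rng = np.random.default_rng(1)
    x = np.asarray(center, dtype=float)
    for _ in range(n_iter):
        cand = fused_histogram(x, radius, cameras, images, n_samples, rng)
        samples = x + radius * rng.uniform(-1, 1, size=(n_samples, 3))
        num, den = np.zeros(3), 0.0
        for X in samples:
            if np.sum(((X - x) / radius) ** 2) >= 1.0:
                continue
            w = 0.0
            for P, img in zip(cameras, images):
                u, v = np.round(project(P, X)).astype(int)
                if 0 <= v < img.shape[0] and 0 <= u < img.shape[1]:
                    b = color_bin(img[v, u])
                    if cand[b] > 0:
                        w += np.sqrt(target_hist[b] / cand[b])
            num, den = num + w * X, den + w
        if den == 0:
            break  # no appearance evidence in any view; keep last estimate
        x_new = num / den
        if np.linalg.norm(x_new - x) < 1e-3:
            return x_new
        x = x_new
    return x


if __name__ == "__main__":
    # Toy scene (pure assumption): two calibrated cameras and a red
    # 50x50-pixel blob standing in for the tracked head at (0.2, 0, 0).
    K = np.array([[500.0, 0, 320], [0, 500, 240], [0, 0, 1]])
    P1 = K @ np.hstack([np.eye(3), [[0.0], [0.0], [5.0]]])
    R2 = np.array([[0.0, 0, -1], [0, 1, 0], [1, 0, 0]])
    P2 = K @ np.hstack([R2, [[0.0], [0.0], [5.0]]])
    images = [np.full((480, 640, 3), 30, dtype=np.uint8) for _ in range(2)]
    for P, img in zip((P1, P2), images):
        u, v = np.round(project(P, np.array([0.2, 0.0, 0.0]))).astype(int)
        img[v - 25:v + 25, u - 25:u + 25] = (200, 20, 20)
    target = fused_histogram(np.array([0.2, 0.0, 0.0]), 0.15, (P1, P2), images)
    est = mean_shift_3d(np.zeros(3), 0.15, (P1, P2), images, target)
    print("3D estimate pulled from the origin toward the target:", est)
```

Because the kernel lives in world coordinates, each view simply renders the same 3D ball: there is no per-view scale selection and no correspondence problem across views, and a camera in which the target is occluded contributes little weight instead of derailing the track, which echoes the advantages claimed in the abstract.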