Visual Integration from Multiple Cameras

  • Authors:
  • Zhonghao Yang; Aaron Bobick

  • Affiliations:
  • Georgia Institute of Technology, Atlanta

  • Venue:
  • WACV/MOTION '05: Proceedings of the Seventh IEEE Workshops on Application of Computer Vision - Volume 1
  • Year:
  • 2005

Abstract

Multi-target visual tracking is a difficult problem, both academically and in engineering practice, due to the inherent ambiguity of perspective projection and the complexity of multi-target management. This paper introduces an improved algorithm for integrating visual cues from multi-camera observations. Geometric constraints from the overlapping camera views, coupled with temporal smoothness constraints, yield improved robustness and accuracy. Targets dynamically entering or exiting the workspace are handled naturally: each target's confidence level accumulates or deteriorates over time, eliminating any cumbersome definition of workspace borders. The output of the algorithm is a set of target location observations, and a simple nearest-neighbor tracker is applied to enforce labeling consistency. The paper presents these algorithmic improvements, which achieve real-time performance and reasonable accuracy in practical cases, and discusses how the approach improves performance in complex real-world scenarios where multiple constraints are combined.
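The confidence-accumulation scheme and nearest-neighbor labeling described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: all thresholds, the greedy matching order, and the `Target` class are assumptions introduced here for clarity.

```python
import math

# Illustrative constants (assumed, not from the paper): matched targets gain
# confidence, unmatched targets decay, and low-confidence targets are dropped.
CONF_GAIN = 0.3    # confidence added when a target is observed
CONF_DECAY = 0.2   # confidence lost when a target is missed
CONF_DEATH = 0.1   # threshold below which a lost target is removed
GATE = 2.0         # maximum distance for a nearest-neighbor match

class Target:
    """A tracked target: a label, a 2-D position, and a confidence level."""
    def __init__(self, label, pos):
        self.label = label
        self.pos = pos
        self.confidence = CONF_GAIN  # new targets start tentative

def nearest_neighbor_update(targets, observations, next_label):
    """Greedily match each target to its nearest observation, then update
    confidences: matched targets gain, unmatched targets decay, unmatched
    observations spawn tentative targets (entering the workspace), and
    targets whose confidence has deteriorated are dropped (exiting)."""
    unmatched = list(observations)
    for t in targets:
        if unmatched:
            best = min(unmatched, key=lambda o: math.dist(o, t.pos))
            if math.dist(best, t.pos) <= GATE:
                t.pos = best
                t.confidence = min(1.0, t.confidence + CONF_GAIN)
                unmatched.remove(best)
                continue
        t.confidence = max(0.0, t.confidence - CONF_DECAY)
    for obs in unmatched:
        targets.append(Target(f"T{next_label}", obs))
        next_label += 1
    targets[:] = [t for t in targets if t.confidence > CONF_DEATH]
    return targets, next_label
```

Because entry and exit are governed entirely by the confidence level, no explicit workspace-border geometry is needed: a target that walks out of view simply decays until it falls below the removal threshold.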