Multi-Camera Tracking with Adaptive Resource Allocation

  • Authors:
  • Bohyung Han; Seong-Wook Joo; Larry S. Davis

  • Affiliations:
  • Dept. of Computer Science and Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Korea; Google Inc., Mountain View, USA; Dept. of Computer Science, University of Maryland, College Park, USA

  • Venue:
  • International Journal of Computer Vision
  • Year:
  • 2011

Abstract

Sensor fusion for object tracking is attractive because integrating multiple sensors and/or algorithms with different characteristics can improve performance. However, sensor fusion techniques face several critical limitations: (1) the measurement cost typically grows linearly with the number of sensors, (2) it is not straightforward to measure the confidence of each source and weight it properly for state estimation, and (3) there is no principled dynamic resource allocation algorithm for improving performance and efficiency. We describe a method to fuse information from multiple sensors and estimate the current tracker state using a mixture of sequential Bayesian filters (e.g., particle filters), one filter for each sensor, where each filter contributes at a different level to a reliable estimate of the combined posterior. In this framework, the sensors interact to dynamically determine an appropriate sensor for each particle; each particle is allocated to only one sensor for measurement, and a different number of particles is assigned to each sensor. The level of contribution of each sensor changes dynamically based on its prior information and relative measurement confidence. We apply this technique to visual tracking with multiple cameras and demonstrate its effectiveness through tracking results on videos.
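
The sketch below illustrates the core idea of the abstract: a mixture of particle filters in which each particle is measured by exactly one sensor (so total measurement cost stays fixed rather than multiplying with the number of sensors), and the particle count per sensor is reallocated each step from the sensors' relative measurement confidences blended with their previous mixture weights. This is a minimal illustrative sketch, not the paper's exact algorithm: the Gaussian stand-in likelihoods (`make_likelihood`), the random-walk motion model, and the specific 50/50 confidence-blending rule are all assumptions made for the example.

```python
import numpy as np

# Hypothetical per-sensor likelihood; in the paper this would be a
# camera-specific appearance model. Here it is a Gaussian stand-in.
def make_likelihood(target, sigma=0.5):
    def likelihood(particles):
        d2 = np.sum((particles - target) ** 2, axis=1)
        return np.exp(-0.5 * d2 / sigma ** 2)
    return likelihood

def mixture_pf_step(particles, assignments, likelihoods,
                    mixture_weights, rng, motion_std=0.2):
    """One step of a mixture of particle filters: each particle is measured
    by its assigned sensor only, and per-sensor particle counts are
    reallocated from relative measurement confidences."""
    n, num_sensors = len(particles), len(likelihoods)

    # 1. Propagate all particles with a simple random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)

    # 2. Measure each particle with its assigned sensor only, so the
    #    total measurement cost is n, not n * num_sensors.
    weights = np.empty(n)
    for s, lik in enumerate(likelihoods):
        idx = assignments == s
        weights[idx] = lik(particles[idx])

    # 3. Per-sensor confidence: mean likelihood of its particles, blended
    #    with the previous mixture weight (the sensor's prior information).
    conf = np.array([weights[assignments == s].mean()
                     if np.any(assignments == s) else 0.0
                     for s in range(num_sensors)])
    mixture_weights = 0.5 * mixture_weights + 0.5 * conf / (conf.sum() + 1e-12)
    mixture_weights /= mixture_weights.sum()

    # 4. Resample from the combined posterior, then reassign particles to
    #    sensors in proportion to the updated mixture weights.
    weights = weights + 1e-12          # guard against all-zero likelihoods
    weights /= weights.sum()
    particles = particles[rng.choice(n, size=n, p=weights)]
    counts = rng.multinomial(n, mixture_weights)
    assignments = np.repeat(np.arange(num_sensors), counts)
    rng.shuffle(assignments)
    return particles, assignments, mixture_weights

# Toy usage: two "cameras" observing roughly the same target.
rng = np.random.default_rng(0)
target = np.array([1.0, -0.5])
likelihoods = [make_likelihood(target), make_likelihood(target + 0.1)]
particles = rng.normal(0.0, 1.0, (200, 2))
assignments = rng.integers(0, 2, 200)
mix = np.array([0.5, 0.5])
for _ in range(10):
    particles, assignments, mix = mixture_pf_step(
        particles, assignments, likelihoods, mix, rng)
print("state estimate:", particles.mean(axis=0), "mixture weights:", mix)
```

A more reliable sensor yields higher average likelihoods, so under this scheme its mixture weight grows and it receives more particles on subsequent steps, which is the adaptive resource allocation the abstract describes.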