Dynamic context for tracking behind occlusions

  • Authors:
  • Fei Xiong, Octavia I. Camps, Mario Sznaier

  • Affiliations:
  • Dept. of Electrical and Computer Engineering, Northeastern University, Boston, MA (all authors)

  • Venue:
  • ECCV'12: Proceedings of the 12th European Conference on Computer Vision, Part V
  • Year:
  • 2012


Abstract

Tracking objects in the presence of clutter and occlusion remains a challenging problem. Current approaches often rely on a priori target dynamics and/or use nearly rigid image context to determine the target position. In this paper, a novel algorithm is proposed to estimate the location of a target while it is hidden due to occlusion. The main idea behind the algorithm is to use contextual dynamical cues from multiple supporter features, which may move with the target, move independently of the target, or remain stationary. These dynamical cues are learned directly from the data, without prior assumptions about the motions of the target and/or the supporter features. As illustrated through several experiments, the proposed algorithm outperforms state-of-the-art approaches under long occlusions and severe camera motion.
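To make the core idea concrete, here is a minimal sketch (not the paper's actual method) of how contextual supporter features can vote for an occluded target's position: each supporter contributes its own position plus a learned offset to the target, and the votes are combined with confidence weights. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def predict_occluded_target(supporter_positions, learned_offsets, weights):
    """Estimate a hidden target's location as a weighted vote of supporter
    predictions (supporter position + its learned offset to the target).

    This is a simplified illustration of context-based occlusion handling,
    not the algorithm proposed in the paper.
    """
    # Each supporter's individual guess for the target location.
    votes = supporter_positions + learned_offsets
    # Normalize confidence weights so they sum to one.
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    # Weighted average of the votes gives the combined estimate.
    return (votes * w[:, None]).sum(axis=0)

# Example: three supporters whose offsets all point to the same spot.
supporters = np.array([[10.0, 20.0], [50.0, 22.0], [30.0, 60.0]])
offsets    = np.array([[ 5.0,  0.0], [-35.0, -2.0], [-15.0, -40.0]])
estimate = predict_occluded_target(supporters, offsets, [1.0, 1.0, 1.0])
print(estimate)  # -> [15. 20.]
```

In the paper's setting, the offsets would not be fixed: the relationships between supporters and the target are learned dynamical models, so even supporters moving independently of the target can still carry predictive information.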