Tracking with depth-from-size

  • Authors:
  • Chen Zhang; Volker Willert; Julian Eggert

  • Affiliations:
  • Darmstadt University of Technology, Darmstadt, Germany; Honda Research Institute Europe GmbH, Offenbach, Germany; Honda Research Institute Europe GmbH, Offenbach, Germany

  • Venue:
  • ICONIP'08: Proceedings of the 15th International Conference on Advances in Neuro-Information Processing - Volume Part I
  • Year:
  • 2008

Abstract

Tracking an object in depth is an important task, since the distance to an object often correlates with an imminent danger, e.g. in the case of an approaching vehicle. A common way to estimate the depth of a tracked object is to use binocular methods such as stereo disparity. In practice, however, depth measurement with binocular methods is technically expensive due to the need for camera calibration and rectification. In addition, larger depths are difficult to estimate because of the inverse relationship between disparity and depth. In this paper a new approach for depth estimation, depth-from-size (DFS), is introduced. We present a human-inspired monocular method in which the depth, the physical size and the retinal size of the object are estimated in a mutually interdependent manner. For each of the three terms, specific measurement and estimation methods are probabilistically combined. Two evaluation scenarios show that this approach is a reliable alternative to the standard stereo disparity approach for depth estimation, with several advantages: 1) simultaneous estimation of depth, physical size and retinal size; 2) no stereo camera calibration and rectification; 3) good depth estimation at larger depth ranges.
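
The sketch below is an illustration, not the authors' implementation: it assumes a simple pinhole camera model in which the retinal (image) size r of an object of physical size S at depth Z satisfies r = f·S/Z, so any two of {r, S, Z} constrain the third. The focal length, baseline, and object sizes used here are made-up example values; the comparison with stereo depth Z = f·B/d only demonstrates why a fixed disparity error hurts more at larger depths.

```python
# Hypothetical illustration of the size/depth relation described in the abstract.
# All parameter values (focal length, baseline, object width) are assumptions.

def retinal_size(physical_size_m, depth_m, focal_length_px=800.0):
    """Projected image size in pixels under a pinhole camera model."""
    return focal_length_px * physical_size_m / depth_m

def depth_from_size(retinal_size_px, physical_size_m, focal_length_px=800.0):
    """Monocular depth estimate once the physical size is (approximately) known."""
    return focal_length_px * physical_size_m / retinal_size_px

def stereo_depth(disparity_px, baseline_m=0.3, focal_length_px=800.0):
    """Stereo depth Z = f * B / d; since d ~ 1/Z, errors grow with depth."""
    return focal_length_px * baseline_m / disparity_px

if __name__ == "__main__":
    # A vehicle assumed to be 1.8 m wide, observed 40 px wide in the image.
    z = depth_from_size(40.0, 1.8)
    print(f"depth-from-size estimate: {z:.1f} m")          # 36.0 m

    # At the same depth, stereo disparity is only ~6.7 px, so a 1 px
    # disparity error already shifts the depth estimate by several metres.
    d = 800.0 * 0.3 / z
    print(f"disparity at {z:.0f} m: {d:.2f} px")
    print(f"stereo depth with 1 px disparity error: {stereo_depth(d - 1.0):.1f} m")
```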