Vision based data fusion for autonomous vehicles target tracking using interacting multiple dynamic models

  • Authors:
  • Zhen Jia; Arjuna Balasuriya; Subhash Challa

  • Affiliations:
  • United Technologies Research Center, Shanghai, 201206, P.R. China; Department of Mechanical Engineering, MIT, Cambridge, MA 02139, USA; Information and Communication Group, Faculty of Engineering, The University of Technology, Sydney, Australia

  • Venue:
  • Computer Vision and Image Understanding
  • Year:
  • 2008

Abstract

In this paper, a novel algorithm is proposed for vision-based object tracking by autonomous vehicles. To estimate the velocity of the tracked object, the algorithm fuses information captured by the vehicle's on-board sensors, such as cameras and inertial motion sensors. Optical flow vectors, color features, and stereo-pair disparities are used as optical features, while the vehicle's inertial measurements are used to determine the cameras' motion. The algorithm estimates the velocity and position of the target in world coordinates, which are then tracked by the vehicle. Formulating this tracking algorithm requires a model that describes the dynamics of the tracked object; however, because the motion of such objects is complex, robust and adaptive dynamic models are needed. Here, several simple linear dynamic models are selected and combined to approximate the unpredictable, complex, or highly nonlinear dynamics of the moving target. With these basic linear dynamic models, a detailed description of the three-dimensional (3D) target-tracking scheme using Interacting Multiple Models (IMM) together with an Extended Kalman Filter is presented. The final state of the target is estimated as a weighted combination of the outputs of the different dynamic models. The performance of the proposed fusion-based IMM tracking algorithm is demonstrated through extensive experimental results.
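The IMM idea sketched in the abstract — running a bank of simple linear dynamic models in parallel and combining their outputs by model probability — can be illustrated in a minimal form. The sketch below is not the paper's implementation: it is a one-dimensional toy with two linear models that share a constant-velocity transition and differ only in an assumed process-noise level (one "steady", one "maneuvering"), observed through position-only measurements. All matrix and noise values are illustrative assumptions.

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity transition (shared by both models)
H = np.array([[1.0, 0.0]])               # position-only measurement
R = np.array([[0.5]])                    # measurement-noise covariance (assumed)

# Two linear dynamic models differing only in process noise:
# model 0 ~ steady motion, model 1 ~ maneuvering target (assumed values)
Qs = [np.diag([1e-4, 1e-4]), np.diag([0.5, 0.5])]
PI = np.array([[0.95, 0.05],             # Markov model-switching probabilities (assumed)
               [0.05, 0.95]])

def kf_step(x, P, Q, z):
    """One Kalman predict/update; also returns the Gaussian
    measurement likelihood used for IMM model weighting."""
    x = F @ x
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)
    y = z - H @ x                        # innovation
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    lik = np.exp(-0.5 * y @ np.linalg.inv(S) @ y) / np.sqrt(2 * np.pi * np.linalg.det(S))
    return x, P, float(lik)

def imm_step(xs, Ps, mu, z):
    # 1) mixing: condition each filter on all models via the switching matrix
    c = PI.T @ mu                        # predicted model probabilities
    x_mix, P_mix = [], []
    for j in range(2):
        w = PI[:, j] * mu / c[j]
        xm = sum(w[i] * xs[i] for i in range(2))
        Pm = sum(w[i] * (Ps[i] + np.outer(xs[i] - xm, xs[i] - xm)) for i in range(2))
        x_mix.append(xm)
        P_mix.append(Pm)
    # 2) model-matched Kalman filtering
    liks = np.zeros(2)
    for j in range(2):
        xs[j], Ps[j], liks[j] = kf_step(x_mix[j], P_mix[j], Qs[j], z)
    # 3) model-probability update from the measurement likelihoods
    mu = liks * c
    mu /= mu.sum()
    # 4) output: probability-weighted combination of the model estimates
    x_out = sum(mu[j] * xs[j] for j in range(2))
    return xs, Ps, mu, x_out
```

The combined estimate in step 4 mirrors the abstract's "weighted combination of the outputs of the different dynamic models"; in the paper each model-matched filter is an Extended Kalman Filter over the fused vision/inertial measurements rather than this linear toy.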