Vision-IMU integration using a slow-frame-rate monocular vision system in an actual roadway setting

  • Authors: Duminda I. B. Randeniya; Sudeep Sarkar; Manjriker Gunaratne

  • Affiliations: Oak Ridge National Laboratory, Oak Ridge, TN; Department of Computer Science and Engineering, University of South Florida, Tampa, FL; Department of Civil Engineering, University of South Florida, Tampa, FL

  • Venue: IEEE Transactions on Intelligent Transportation Systems

  • Year: 2010


Abstract

We present results of an effort in which position and orientation data from vision and inertial sensors are integrated and validated using data from an actual roadway. Information from a sequence of images, captured by a monocular camera attached to a survey vehicle at a maximum frequency of 3 frames/s, is fused with position and orientation estimates from the inertial system to correct the error accumulation inherent in such integration-based systems. Rotations and translations are estimated from point correspondences tracked through the image sequence. To reject unsuitable correspondences, we apply constraints based on epipolar lines and correspondence flow directions. The vision algorithm operates automatically, identifying point correspondences, pruning them, and estimating the motion parameters. To obtain the geodetic coordinates, i.e., latitude, longitude, and altitude, from the translation-direction estimates of the vision sensor, we expand the Kalman filter state space to incorporate distance; this makes it possible to recover the full translation vector from the available translation-direction estimate of the vision system. Finally, a decentralized Kalman filter integrates the position estimates of the vision sensor with those of the inertial system; the fusion of the two sensors is carried out at the system level. Comparison of the integrated vision-inertial-measuring-unit (IMU) position estimates with inertial-GPS output and an actual survey demonstrates that vision sensing can be used to reduce errors in inertial measurements during potential GPS outages.
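The decentralized fusion step described in the abstract can be illustrated with a minimal sketch. In a decentralized Kalman architecture, each local filter (here, IMU and vision) produces its own state estimate and covariance, and a master filter combines them by inverse-covariance (information) weighting. The function name, state dimensions, and covariance values below are illustrative assumptions, not the paper's actual filter, which fuses at the system level with a state expanded to include distance.

```python
import numpy as np

def fuse_estimates(x_imu, P_imu, x_vis, P_vis):
    """Information-weighted fusion of two local state estimates.

    Sketch of the master-filter step in a decentralized Kalman
    filter: combine the IMU and vision estimates by weighting each
    with the inverse of its covariance.
    """
    I_imu = np.linalg.inv(P_imu)  # information matrix of the IMU filter
    I_vis = np.linalg.inv(P_vis)  # information matrix of the vision filter
    P_fused = np.linalg.inv(I_imu + I_vis)
    x_fused = P_fused @ (I_imu @ x_imu + I_vis @ x_vis)
    return x_fused, P_fused

# Hypothetical 3-D position estimates (e.g., local residuals in metres)
x_imu = np.array([10.2, 4.9, 1.1])   # drifting integration-based estimate
P_imu = np.diag([4.0, 4.0, 9.0])     # large covariance: accumulated drift
x_vis = np.array([10.0, 5.0, 1.0])   # vision-derived estimate
P_vis = np.diag([1.0, 1.0, 2.25])    # tighter covariance

x_f, P_f = fuse_estimates(x_imu, P_imu, x_vis, P_vis)
```

The fused covariance is tighter than either input covariance, which is why periodic vision updates can bound the IMU's drift during a GPS outage.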