Visual-inertial simultaneous localization, mapping and sensor-to-sensor self-calibration

  • Authors:
  • Jonathan Kelly; Gaurav S. Sukhatme

  • Affiliations:
  • Robotic Embedded Systems Laboratory, University of Southern California, Los Angeles, California (both authors)

  • Venue:
  • CIRA'09: Proceedings of the 8th IEEE International Conference on Computational Intelligence in Robotics and Automation
  • Year:
  • 2009


Abstract

Visual and inertial sensors, in combination, are well-suited for many robot navigation and mapping tasks. However, correct data fusion, and hence overall system performance, depends on accurate calibration of the 6-DOF transform between the sensors (one or more cameras and an inertial measurement unit (IMU)). Obtaining this calibration information is typically difficult and time-consuming. In this paper, we describe an algorithm, based on the unscented Kalman filter (UKF), for camera-IMU simultaneous localization, mapping and sensor relative pose self-calibration. We show that the sensor-to-sensor transform, the IMU gyroscope and accelerometer biases, the local gravity vector, and the metric scene structure can all be recovered from camera and IMU measurements alone, without any prior knowledge of the environment in which the robot is operating. We present results from experiments with a monocular camera and a low-cost solid-state IMU, which demonstrate accurate estimation of both the calibration parameters and the local scene structure.
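
The abstract identifies the unscented Kalman filter as the estimator, with a state that includes the vehicle pose, the camera-IMU transform, the IMU biases, the gravity vector, and landmark positions. The sketch below is a minimal, hypothetical illustration of a UKF measurement update in Python, not the authors' implementation: the state is reduced to translations only (rotations held fixed at identity), a single landmark stands in for the map, and the measurement model is an ideal pinhole projection.

```python
import numpy as np

def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Scaled sigma points and weights for the unscented transform."""
    n = mean.size
    lam = alpha ** 2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * cov)          # matrix square root
    pts = np.vstack([mean, mean + L.T, mean - L.T])  # 2n + 1 points
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha ** 2 + beta)
    return pts, wm, wc

def ukf_update(mean, cov, z, h, R):
    """One UKF measurement update with a nonlinear measurement model h."""
    pts, wm, wc = sigma_points(mean, cov)
    Z = np.array([h(p) for p in pts])     # propagate sigma points through h
    z_hat = wm @ Z                        # predicted measurement
    dZ, dX = Z - z_hat, pts - mean
    S = dZ.T @ (wc[:, None] * dZ) + R     # innovation covariance
    Pxz = dX.T @ (wc[:, None] * dZ)       # state-measurement cross-covariance
    K = Pxz @ np.linalg.inv(S)            # Kalman gain
    return mean + K @ (z - z_hat), cov - K @ S @ K.T

# Hypothetical reduced state: [IMU position (3), camera-in-IMU translation (3),
# landmark position (3)]. The paper's full state also carries orientations,
# velocity, gyro/accelerometer biases, and the gravity vector.
def h(x):
    p_imu, t_ci, lm = x[:3], x[3:6], x[6:9]
    p_c = lm - (p_imu + t_ci)             # landmark in the camera frame
    return p_c[:2] / p_c[2]               # normalized pinhole projection

x = np.zeros(9)
x[8] = 5.0                                # landmark 5 m in front of the camera
P = 0.1 * np.eye(9)
R = 1e-4 * np.eye(2)                      # image-measurement noise
z = np.array([0.01, -0.02])               # one simulated observation
x, P = ukf_update(x, P, z, h, R)
print(x[3:6])                             # updated camera-IMU translation estimate
```

A full visual-inertial filter would parameterize the orientations with quaternions (or another rotation-manifold representation) and drive the prediction step with the IMU measurements; only the update structure is shown here.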