Vision-aided inertial navigation for spacecraft entry, descent, and landing

  • Authors: Anastasios I. Mourikis; Nikolas Trawny; Stergios I. Roumeliotis; Andrew E. Johnson; Adnan Ansar; Larry Matthies
  • Affiliations: Department of Electrical Engineering, University of California, Riverside, CA (Mourikis); Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN (Trawny, Roumeliotis); Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA (Johnson, Ansar, Matthies)

  • Venue: IEEE Transactions on Robotics
  • Year: 2009

Abstract

In this paper, we present the vision-aided inertial navigation (VISINAV) algorithm that enables precision planetary landing. The vision front-end of the VISINAV system extracts 2-D-to-3-D correspondences between descent images and a surface map (mapped landmarks), as well as 2-D-to-2-D feature tracks through a sequence of descent images (opportunistic features). An extended Kalman filter (EKF) tightly integrates both types of visual feature observations with measurements from an inertial measurement unit. The filter computes accurate estimates of the lander's terrain-relative position, attitude, and velocity in a resource-adaptive, and hence real-time-capable, fashion. In addition to the technical analysis of the algorithm, the paper presents validation results from a sounding-rocket test flight, showing estimation errors of only 0.16 m/s for velocity and 6.4 m for position at touchdown. These results vastly improve on the current state of the art for terminal descent navigation without visual updates, and meet the requirements of future planetary exploration missions.
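To make the tightly coupled update the abstract describes more concrete, the Python sketch below shows a deliberately simplified EKF that propagates a position/velocity state with IMU accelerations and corrects it with a pixel observation of a mapped landmark (a 3-D point known from the surface map). This is a hypothetical illustration, not the authors' implementation: all names (propagate, update_mapped_landmark, R_cam, f, R_px) are assumptions, attitude is treated as known, and the actual VISINAV filter additionally estimates attitude and IMU biases and processes opportunistic feature tracks.

```python
import numpy as np

def propagate(x, P, a_meas, Q, dt):
    """Constant-acceleration propagation of state x = [position(3), velocity(3)].

    a_meas: IMU acceleration (gravity-compensated, world frame, assumed),
    Q: process-noise covariance, dt: timestep.
    """
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)                       # position integrates velocity
    x = F @ x + np.concatenate([0.5 * dt**2 * a_meas, dt * a_meas])
    P = F @ P @ F.T + Q
    return x, P

def update_mapped_landmark(x, P, z, p_L, R_cam, f, R_px):
    """EKF update with a pixel measurement z of a known 3-D landmark p_L.

    R_cam: world-to-camera rotation (attitude assumed known here),
    f: focal length in pixels, R_px: 2x2 pixel-noise covariance.
    """
    p_c = R_cam @ (p_L - x[:3])                      # landmark in camera frame
    X, Y, Z = p_c
    h = f * np.array([X / Z, Y / Z])                 # predicted pinhole projection
    # Jacobian of the projection wrt the camera-frame point, chained with
    # d(p_c)/d(position) = -R_cam
    J_proj = (f / Z) * np.array([[1.0, 0.0, -X / Z],
                                 [0.0, 1.0, -Y / Z]])
    H = np.zeros((2, 6))
    H[:, :3] = J_proj @ (-R_cam)
    S = H @ P @ H.T + R_px                           # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    x = x + K @ (z - h)
    P = (np.eye(6) - K @ H) @ P
    return x, P
```

The key design point the sketch tries to convey is tight coupling: the filter is updated directly with raw pixel reprojection residuals rather than with a separately computed pose fix, so each landmark observation contributes according to its own geometry and noise, which is what allows the filter to remain accurate while adapting the number of processed features to the available computation.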