Graph-based distributed cooperative navigation for a general multi-robot measurement model

  • Authors:
  • Vadim Indelman; Pini Gurfil; Ehud Rivlin; Hector Rotstein

  • Affiliations:
  • College of Computing, Georgia Institute of Technology, Atlanta, GA, USA; Faculty of Aerospace Engineering, Technion - Israel Institute of Technology, Haifa, Israel; Department of Computer Science, Technion - Israel Institute of Technology, Haifa, Israel; RAFAEL - Advanced Defense Systems Limited, Israel

  • Venue:
  • International Journal of Robotics Research
  • Year:
  • 2012

Abstract

Cooperative navigation (CN) enables a group of cooperative robots to reduce their individual navigation errors. For a general multi-robot (MR) measurement model that involves both inertial navigation data and other onboard sensor readings, taken at different time instances, the various sources of information become correlated. This correlation must therefore be accounted for during information fusion to obtain consistent state estimates. The common approach for obtaining the correlation terms is to maintain an augmented covariance matrix. This method works for relative pose measurements, but is impractical for a general MR measurement model, because the identities of the robots involved in generating the measurements, as well as the measurement time instances, are unknown a priori. In the current work, a new consistent information fusion method for a general MR measurement model is developed. The proposed approach relies on graph theory and enables explicit, on-demand calculation of the required correlation terms. Each robot in the group locally maintains a graph representing all of the MR measurement updates. The developed method calculates the correlation terms in the most general scenarios of MR measurements while properly handling the process and measurement noise involved. A theoretical example and a statistical study are provided, demonstrating the performance of the method for vision-aided navigation based on a three-view measurement model. The method is compared, in a simulated environment, with a fixed-lag centralized smoothing approach, and is further validated in an experiment involving real imagery and navigation data. Estimates of the computational complexity show that the developed method is efficient.
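
To make the abstract's core idea concrete, the sketch below illustrates a locally maintained graph of navigation states with on-demand computation of a correlation (cross-covariance) term by chaining stored state-transition matrices along a graph path. It is a minimal illustration under stated assumptions, not the paper's actual algorithm: the class name LocalFusionGraph, the node and edge attributes, and the single-robot, single-chain cross_covariance traversal are hypothetical, whereas the paper handles general multi-robot measurements taken at different times and the associated noise terms.

```python
import numpy as np
import networkx as nx


class LocalFusionGraph:
    """Illustrative sketch: nodes are (robot_id, time_index) navigation states,
    edges store the state-transition matrix Phi of an inertial propagation step,
    so correlation terms can be computed on demand instead of being kept in an
    augmented covariance matrix. Not the paper's actual formulation."""

    def __init__(self):
        self.g = nx.DiGraph()

    def add_state(self, robot_id, k, covariance):
        # Register an initial navigation state with its covariance P.
        self.g.add_node((robot_id, k), P=np.asarray(covariance, dtype=float))

    def add_propagation(self, robot_id, k, Phi, Q):
        # Inertial propagation from time k to k+1: P_{k+1} = Phi P_k Phi^T + Q.
        Phi, Q = np.asarray(Phi, dtype=float), np.asarray(Q, dtype=float)
        P_next = Phi @ self.g.nodes[(robot_id, k)]["P"] @ Phi.T + Q
        self.g.add_node((robot_id, k + 1), P=P_next)
        self.g.add_edge((robot_id, k), (robot_id, k + 1), Phi=Phi)

    def cross_covariance(self, robot_id, k_from, k_to):
        # On-demand correlation term Cov(x_{k_to}, x_{k_from}) = Phi_total P_{k_from},
        # obtained by chaining the transition matrices stored on the graph edges;
        # the added process noise is independent of x_{k_from}, so it drops out.
        P_from = self.g.nodes[(robot_id, k_from)]["P"]
        Phi_total = np.eye(P_from.shape[0])
        for k in range(k_from, k_to):
            Phi_total = self.g.edges[(robot_id, k), (robot_id, k + 1)]["Phi"] @ Phi_total
        return Phi_total @ P_from


if __name__ == "__main__":
    graph = LocalFusionGraph()
    graph.add_state("robot_A", 0, np.eye(2) * 0.1)
    Phi = np.array([[1.0, 1.0], [0.0, 1.0]])  # toy constant-velocity model, dt = 1
    Q = np.eye(2) * 0.01
    for k in range(3):
        graph.add_propagation("robot_A", k, Phi, Q)
    print(graph.cross_covariance("robot_A", 0, 3))
```

The point of the sketch is the design trade-off named in the abstract: rather than carrying every pairwise correlation in one ever-growing augmented covariance matrix, each robot stores only the local graph and reconstructs a required correlation term by traversing it when a multi-robot measurement actually arrives.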