An accurate and robust visual-compass algorithm for robot-mounted omnidirectional cameras

  • Authors:
  • Gian Luca Mariottini, Stefano Scheggi, Fabio Morbidi, Domenico Prattichizzo

  • Affiliations:
  • Department of Computer Science and Engineering, University of Texas at Arlington, Engineering Research Building, 500 UTA Boulevard, Arlington, TX 76019, USA (G.L. Mariottini)
  • Department of Information Engineering, University of Siena, Via Roma 56, I-53100 Siena, Italy (S. Scheggi, F. Morbidi, D. Prattichizzo)

  • Venue:
  • Robotics and Autonomous Systems
  • Year:
  • 2012

Abstract

Due to their wide field of view, omnidirectional cameras are becoming ubiquitous in many mobile robotic applications. A challenging problem is to use these sensors, mounted on mobile robotic platforms, as visual compasses (VCs) that estimate the rotational motion of the camera/robot from the omnidirectional video stream. Existing VC algorithms suffer from practical limitations, since they require precise knowledge of either the camera-calibration parameters or the 3-D geometry of the observed scene. In this paper we present a novel multiple-view geometry constraint for paracatadioptric views of 3-D lines, which we use to design a VC algorithm that requires neither the camera-calibration parameters nor the 3-D scene geometry. In addition, our algorithm runs in real time, since it relies on a closed-form estimate of the camera/robot rotation, and it can also address the image-feature correspondence problem. Extensive simulations and experiments with real robots show the accuracy and robustness of the proposed method.
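
The paper's VC obtains the camera/robot rotation in closed form from a multiple-view constraint on paracatadioptric line images; that constraint is specific to the paper and is not reproduced here. As a loose, generic illustration only (not the authors' method), the sketch below recovers a rotation about the vertical axis in closed form from matched unit bearing directions in two consecutive frames, using a standard least-squares (2-D Procrustes) alignment. The function names, the synthetic data, and the use of NumPy are assumptions introduced for this example.

```python
import numpy as np

def estimate_yaw(bearings_prev, bearings_curr):
    """Closed-form estimate of the rotation angle about the vertical axis
    that best aligns two sets of matched 2-D unit bearing directions.

    bearings_prev, bearings_curr: (N, 2) arrays of matched bearings from
    consecutive frames. Returns the estimated rotation angle in radians.
    """
    # The optimal angle maximizes sum_i b_curr_i . R(theta) b_prev_i,
    # which has the closed-form solution theta = atan2(S, C) with
    # S = sum of 2-D cross products and C = sum of dot products.
    s = np.sum(bearings_prev[:, 0] * bearings_curr[:, 1]
               - bearings_prev[:, 1] * bearings_curr[:, 0])
    c = np.sum(bearings_prev * bearings_curr)
    return np.arctan2(s, c)

if __name__ == "__main__":
    # Synthetic check: rotate random unit bearings by 12 degrees, add noise.
    rng = np.random.default_rng(0)
    true_angle = np.deg2rad(12.0)
    R = np.array([[np.cos(true_angle), -np.sin(true_angle)],
                  [np.sin(true_angle),  np.cos(true_angle)]])
    prev = rng.normal(size=(50, 2))
    prev /= np.linalg.norm(prev, axis=1, keepdims=True)
    curr = prev @ R.T + 0.01 * rng.normal(size=prev.shape)
    print(np.rad2deg(estimate_yaw(prev, curr)))  # close to 12 degrees
```

Because the estimate is a single atan2 of two accumulated sums, it runs in linear time per frame pair, which is consistent with the real-time requirement discussed in the abstract; the paper's actual estimator operates on paracatadioptric line projections rather than the generic bearings assumed here.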