Autonomous navigation of vehicles from a visual memory using a generic camera model

  • Authors:
  • Jonathan Courbon; Youcef Mezouar; Philippe Martinet

  • Affiliations:
  • Laboratoire des Sciences et Matériaux pour l'Electronique et d'Automatique, Aubière, France and Centre d'Etude Atomique, List, Fontenay-aux-Roses, France; Blaise Pascal University, Clermont-Ferrand, France and Laboratoire des Sciences et Matériaux pour l'Electronique et d'Automatique, Aubière, France; Laboratoire des Sciences et Matériaux pour l'Electronique et d'Automatique, Aubière, France and Institut Français de Mécanique Avancée, Aubière, France

  • Venue:
  • IEEE Transactions on Intelligent Transportation Systems
  • Year:
  • 2009

Abstract

In this paper, we present a complete framework for autonomous vehicle navigation using a single camera and natural landmarks. When navigating in an unknown environment for the first time, a usual behavior is to memorize some key views along the traveled path and to use these references as checkpoints for future navigation missions. The navigation framework for wheeled vehicles presented in this paper is based on this idea. During a human-guided learning step, the vehicle performs paths that are sampled and stored as sets of ordered key images acquired by an embedded camera. These visual paths are topologically organized and provide a visual memory of the environment. Given an image of the visual memory as a target, a navigation mission is defined as a concatenation of visual path subsets called visual routes. When running autonomously, the vehicle is guided along the reference visual route without explicitly planning any trajectory; the controller is a vision-based control law adapted to the nonholonomic constraint. The framework has been designed for a generic class of cameras (including conventional, catadioptric, and fisheye cameras). Experiments with an urban electric vehicle navigating in an outdoor environment have been carried out with a fisheye camera along a 750-m-long trajectory, and the results validate our approach.
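
The abstract only outlines the framework, so the sketch below is an illustrative reading of it rather than the authors' implementation: key images form a topological graph (the visual memory), a visual route is obtained by chaining visual path subsets between the current and target key images, and a steering command is computed from errors relative to the current reference key image. The class `VisualMemory`, the BFS-based route extraction, and the gains in `steering_command` are hypothetical names and values introduced here for illustration.

```python
"""Illustrative sketch (assumptions, not the paper's code) of a topological
visual memory, visual-route extraction, and a placeholder steering law."""

from collections import deque


class VisualMemory:
    """Key images stored as nodes of a directed graph; edges link successive
    key images of the learned visual paths (topological organization)."""

    def __init__(self):
        self.edges = {}  # key-image id -> list of successor key-image ids

    def add_visual_path(self, key_image_ids):
        """Store one learned path as an ordered chain of key images."""
        for a, b in zip(key_image_ids, key_image_ids[1:]):
            self.edges.setdefault(a, []).append(b)
            self.edges.setdefault(b, [])

    def visual_route(self, start, target):
        """Concatenate visual path subsets into a visual route: BFS over the
        memory graph from the current key image to the target key image."""
        parents, frontier = {start: None}, deque([start])
        while frontier:
            node = frontier.popleft()
            if node == target:
                route = []
                while node is not None:
                    route.append(node)
                    node = parents[node]
                return route[::-1]
            for nxt in self.edges.get(node, []):
                if nxt not in parents:
                    parents[nxt] = node
                    frontier.append(nxt)
        return None  # target image not reachable in the visual memory


def steering_command(lateral_error, angular_error, k_y=0.5, k_theta=1.0):
    """Hypothetical state-feedback steering law: the abstract only states that
    a vision-based law adapted to the nonholonomic constraint is used, so this
    simple form (errors measured w.r.t. the current key image) stands in for it."""
    return -(k_y * lateral_error + k_theta * angular_error)


if __name__ == "__main__":
    memory = VisualMemory()
    memory.add_visual_path(["I0", "I1", "I2", "I3"])  # first learned visual path
    memory.add_visual_path(["I2", "I4", "I5"])        # second path, branching at I2
    print(memory.visual_route("I0", "I5"))            # ['I0', 'I1', 'I2', 'I4', 'I5']
```

Because the memory is organized topologically rather than metrically, a navigation mission reduces to graph search over key images followed by local key-image following, which is consistent with the abstract's statement that no explicit trajectory is planned.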