Visual Navigation in Natural Environments: From Range and Color Data to a Landmark-Based Model

  • Authors:
  • Rafael Murrieta-Cid; Carlos Parra; Michel Devy

  • Affiliations:
  • ITESM Campus Ciudad de México, Calle del puente 222, Tlalpan, México DF. rmurriet@campus.ccm.itesm.mx
  • Pontificia Universidad Javeriana, Cra 7 No 40-62 Bogotá D.C., Colombia. carlos.parra@javeriana.edu.co
  • Laboratoire d'Analyse et d'Architecture des Systèmes (LAAS-CNRS), 7, Avenue du Colonel Roche, 31077 Toulouse Cedex 4, France. michel@laas.fr

  • Venue:
  • Autonomous Robots
  • Year:
  • 2002

Abstract

This paper concerns the exploration of a natural environment by a mobile robot equipped with both a color video camera and a stereo-vision system. We focus on the value of such a multi-sensory system for navigating in an a priori unknown environment, covering (1) the incremental construction of a landmark-based model, and the use of these landmarks for (2) the 3-D localization of the mobile robot and (3) a sensor-based navigation mode.

For robot localization, a slow process and a fast one run concurrently during the robot's motions. In the modeling process (currently 0.1 Hz), the global landmark-based model is built incrementally and the robot's situation is estimated from discriminant landmarks selected among the objects detected in the range data. In the tracking process (currently 4 Hz), the selected landmarks are tracked in the visual data; the tracking results simplify the matching between landmarks in the modeling process.

Finally, a sensor-based visual navigation mode, built on the same landmark selection and tracking, is also presented; to navigate over a long motion, successive landmarks (targets) are selected as a sequence of sub-goals that the robot must reach in turn.
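The two-rate localization architecture described above can be made concrete with a short sketch. The Python code below is not the authors' implementation: every name (Landmark, LandmarkModel, TrackingProcess, ModelingProcess), the simulated detections, and all numeric values are hypothetical placeholders, and the fast/slow processes are emulated on a single thread with a 0.25 s tick standing in for the 4 Hz tracker and a 10 s period standing in for the 0.1 Hz modeler.

```python
import math
import random
from dataclasses import dataclass, field


@dataclass
class Landmark:
    ident: int
    position: tuple                    # 3-D position in the global frame (from range data)
    image_pos: tuple = (0.0, 0.0)      # last tracked 2-D position in the color image


@dataclass
class LandmarkModel:
    landmarks: dict = field(default_factory=dict)

    def add(self, lm: Landmark):
        self.landmarks[lm.ident] = lm


class TrackingProcess:
    """Fast loop (~4 Hz): track the currently selected landmarks in the image data."""

    def __init__(self):
        self.tracked = {}              # landmark id -> latest 2-D image position

    def step(self, selected):
        # Placeholder tracker: jitter the previous image position, standing in
        # for a real visual tracker operating on the color images.
        for lm in selected:
            u, v = lm.image_pos
            lm.image_pos = (u + random.gauss(0, 0.5), v + random.gauss(0, 0.5))
            self.tracked[lm.ident] = lm.image_pos


class ModelingProcess:
    """Slow loop (~0.1 Hz): grow the global landmark model and estimate the robot pose."""

    def __init__(self, model: LandmarkModel):
        self.model = model
        self.robot_pose = (0.0, 0.0, 0.0)        # x, y, heading

    def step(self, range_detections, tracked_image_positions):
        # Add newly detected objects as landmarks; in the paper the tracked
        # image positions constrain landmark matching, which is omitted here.
        for ident, pos in range_detections:
            if ident not in self.model.landmarks:
                self.model.add(Landmark(ident, pos))
        # Toy pose update standing in for landmark-based 3-D localization.
        x, y, th = self.robot_pose
        self.robot_pose = (x + 1.0, y, th)


def run(duration_s=40.0):
    model = LandmarkModel()
    modeling = ModelingProcess(model)
    tracking = TrackingProcess()

    # Sensor-based navigation: a sequence of target landmarks (sub-goals) the
    # robot must reach one after another; ids and positions are invented.
    subgoals = [Landmark(1, (2.0, 0.0, 0.0), (320.0, 240.0)),
                Landmark(2, (4.0, 0.0, 0.0), (300.0, 250.0))]
    goal_idx = 0

    t, dt = 0.0, 0.25                            # 0.25 s ticks -> 4 Hz fast loop
    while t < duration_s and goal_idx < len(subgoals):
        target = subgoals[goal_idx]
        tracking.step([target])                  # fast loop runs every tick
        if math.isclose(t % 10.0, 0.0, abs_tol=1e-9):      # slow loop every 10 s
            detections = [(target.ident, target.position)]
            modeling.step(detections, tracking.tracked)
        # Advance to the next sub-goal once the estimated pose is close enough.
        x, y, _ = modeling.robot_pose
        gx, gy, _ = target.position
        if math.hypot(gx - x, gy - y) < 0.5:
            goal_idx += 1
        t += dt
    print("pose:", modeling.robot_pose, "landmarks:", sorted(model.landmarks))


if __name__ == "__main__":
    run()
```

Running the sketch steps through both sub-goals and ends with both landmarks in the model; in the actual system the two loops would run as concurrent processes fed by the stereo range data and the color images rather than by simulated detections.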