Learning spatially semantic representations for cognitive robot navigation

  • Authors:
  • Ioannis Kostavelis; Antonios Gasteratos

  • Venue:
  • Robotics and Autonomous Systems
  • Year:
  • 2013

Abstract

Contemporary mobile robots should exhibit enhanced capacities that allow them to self-localize and semantically interpret their surroundings as they move through an unexplored environment. The coexistence of accurate SLAM and place recognition can provide a descriptive and adaptable navigation model. In this paper, such a two-layer navigation scheme suitable for indoor environments is introduced. The low layer comprises a 3D SLAM system based solely on an RGB-D sensor, whilst the high one employs a novel content-based representation algorithm suitable for spatial abstraction. During the robot's locomotion, salient visual features are detected and form a bag-of-features representation, quantized by a Neural Gas to encode the spatial information of each scene. The learning procedure is performed by an SVM classifier able to accurately recognize multiple dissimilar places. The two layers interact through a semantically annotated topological graph, augmenting the cognitive attributes of the integrated system. The proposed framework is assessed on several datasets, exhibiting remarkable accuracy. Moreover, the appearance-based algorithm produces semantic inferences suitable for labeling unexplored environments.
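The abstract outlines an appearance-based place-recognition pipeline: local descriptors extracted per scene are quantized by a Neural Gas into a bag-of-features histogram, which an SVM then classifies into places. The sketch below illustrates that pipeline in Python with NumPy and scikit-learn. It is not the authors' code: the Neural Gas hyperparameters (eps0, lam0 and their decay schedules), the codebook size, the descriptor dimensionality, and the synthetic "scenes" are all illustrative assumptions; a real system would feed it SIFT/SURF-style descriptors computed from RGB-D frames.

```python
import numpy as np
from sklearn.svm import SVC


def train_neural_gas(data, n_units=16, n_iters=5000,
                     eps0=0.5, eps_f=0.005, lam0=10.0, lam_f=0.5, seed=0):
    """Fit a Neural Gas codebook (Martinetz & Schulten) to descriptor vectors."""
    rng = np.random.default_rng(seed)
    # Initialize codebook from randomly chosen samples.
    codebook = data[rng.choice(len(data), n_units, replace=False)].copy()
    for t in range(n_iters):
        frac = t / n_iters
        eps = eps0 * (eps_f / eps0) ** frac   # exponentially decaying learning rate
        lam = lam0 * (lam_f / lam0) ** frac   # exponentially decaying neighbourhood range
        x = data[rng.integers(len(data))]
        dists = np.linalg.norm(codebook - x, axis=1)
        ranks = np.argsort(np.argsort(dists))          # rank 0 = closest unit
        # Pull every unit toward the sample, weighted by its distance rank.
        codebook += eps * np.exp(-ranks / lam)[:, None] * (x - codebook)
    return codebook


def bof_histogram(descriptors, codebook):
    """Quantize one scene's descriptors against the codebook into a normalized histogram."""
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    hist = np.bincount(dists.argmin(axis=1), minlength=len(codebook)).astype(float)
    return hist / hist.sum()


# Toy data: two "places" whose synthetic descriptors stand in for the
# salient visual features extracted from RGB-D frames in the paper.
rng = np.random.default_rng(1)
place_a = rng.normal(0.0, 1.0, size=(500, 32))
place_b = rng.normal(3.0, 1.0, size=(500, 32))

codebook = train_neural_gas(np.vstack([place_a, place_b]))

# Each training sample is the histogram of a random 100-descriptor "scene".
X = [bof_histogram(rng.permutation(p)[:100], codebook)
     for p in (place_a, place_b) for _ in range(20)]
y = [0] * 20 + [1] * 20

clf = SVC(kernel="rbf").fit(X, y)
test = bof_histogram(rng.normal(3.0, 1.0, size=(100, 32)), codebook)
print("predicted place:", clf.predict([test])[0])   # expect 1 (place_b)
```

Compared with plain k-means, Neural Gas updates every codebook vector with a rank-weighted step rather than only the winner, which makes the resulting quantization less sensitive to codebook initialization.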