Vision-based navigation with efficient scene recognition

  • Authors:
  • Jungho Kim; Chaehoon Park; In So Kweon

  • Affiliations:
  • Department of Electrical Engineering, KAIST, Daejeon, Republic of Korea (all authors)

  • Venue:
  • Intelligent Service Robotics
  • Year:
  • 2011

Abstract

In this paper, we propose an efficient feature matching method for scene recognition and global localization. The proposed method enables mobile robots to navigate autonomously through dynamic environments in which the robot frequently encounters visual occlusions and kidnapping. For this purpose, we present a scale optimization method that enhances matching performance by combining the FAST detector with computationally efficient integral-image-based SIFT descriptors. The scale optimization is required because the FAST detector does not provide the scale information needed to compute descriptors for matching. We evaluate the feature matching performance on various indoor image sequences and demonstrate the robustness of our navigation system under various conditions.
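
The key point of the abstract is that FAST keypoints carry no scale, so a candidate scale must be chosen before descriptors can be computed and matched. The sketch below is only a minimal illustration of that idea, not the authors' implementation: it uses OpenCV's FAST detector and standard SIFT descriptors as a stand-in for the paper's integral-image-based descriptors, and the candidate scales, FAST threshold, ratio-test value, and helper names are assumptions made for the example.

```python
# Hypothetical sketch of scale selection for scale-less FAST keypoints.
# Not the method from the paper; standard OpenCV SIFT stands in for the
# integral-image-based descriptors described in the abstract.
import cv2


def detect_fast_keypoints(image, threshold=20):
    """Detect FAST corners on a grayscale image; FAST returns positions only, no scale."""
    fast = cv2.FastFeatureDetector_create(threshold=threshold)
    return fast.detect(image, None)


def describe_at_scale(image, keypoints, patch_size):
    """Assign one candidate scale to the scale-less keypoints and compute descriptors."""
    kps = [cv2.KeyPoint(kp.pt[0], kp.pt[1], patch_size) for kp in keypoints]
    sift = cv2.SIFT_create()
    kps, desc = sift.compute(image, kps)
    return kps, desc


def best_scale_match(query_img, ref_desc, candidate_sizes=(8, 16, 24, 32)):
    """Try several candidate scales and keep the one yielding the most ratio-test matches
    against a reference descriptor set (e.g., from a stored scene image)."""
    keypoints = detect_fast_keypoints(query_img)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    best_size, best_count = None, -1
    for size in candidate_sizes:
        _, desc = describe_at_scale(query_img, keypoints, size)
        if desc is None:
            continue
        matches = matcher.knnMatch(desc, ref_desc, k=2)
        good = [p[0] for p in matches
                if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
        if len(good) > best_count:
            best_size, best_count = size, len(good)
    return best_size, best_count
```

In this toy version the "optimized" scale is simply the candidate size that maximizes the number of consistent matches; the paper's actual scale optimization procedure may differ.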