A MapReduce-based indoor visual localization system using affine invariant features

  • Authors:
  • Tien-Ruey Hsiang; Yu Fu; Ching-Wei Chen; Sheng-Luen Chung

  • Affiliations:
  • Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan (Tien-Ruey Hsiang; Ching-Wei Chen)
  • Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan (Yu Fu; Sheng-Luen Chung)

  • Venue:
  • Computers and Electrical Engineering
  • Year:
  • 2013

Abstract

This paper proposes a vision-based indoor localization service system that adopts affine scale-invariant features (ASIFT) within the MapReduce framework. Compared with prior vision-based localization methods that use scale-invariant features or bag-of-words models to match database images, the proposed ASIFT-based system achieves a higher localization hit rate, especially when the query image differs from the most similar database image by a large viewing angle. The heavy computation imposed by ASIFT feature detection and image registration is handled by processes designed within the MapReduce framework to speed up the localization service. Experiments on a Hadoop computation cluster show the performance of the localization system, and the improved hit rate is demonstrated by comparing the proposed approach with previous work based on scale-invariant feature matching and visual vocabularies.
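The abstract describes a map/reduce decomposition in which matching a query image against the database is parallelized and the best-matching database image determines the estimated location. The sketch below is a minimal, single-machine illustration of that decomposition, not the paper's implementation: the descriptors are random NumPy arrays standing in for precomputed ASIFT descriptors, and the function names (`match_count`, `map_step`, `reduce_step`), the database records, and the ratio-test threshold are illustrative assumptions. In the actual system these steps would run as Hadoop MapReduce tasks over the image database.

```python
# Illustrative sketch only. Assumptions: descriptors are precomputed
# 128-D ASIFT-style descriptors stored as NumPy arrays; names, records,
# and the 0.75 ratio-test threshold are made up for this example.
import numpy as np

def match_count(query_desc, db_desc, ratio=0.75):
    """Count query descriptors whose nearest database descriptor
    passes Lowe's ratio test (brute-force L2 matching)."""
    count = 0
    for q in query_desc:
        dists = np.linalg.norm(db_desc - q, axis=1)
        if len(dists) < 2:
            continue
        nearest, second = np.partition(dists, 1)[:2]
        if nearest < ratio * second:
            count += 1
    return count

def map_step(query_desc, db_record):
    """Mapper: one (location, descriptors) database record in,
    one (location, match score) pair out."""
    location, db_desc = db_record
    return location, match_count(query_desc, db_desc)

def reduce_step(mapped_pairs):
    """Reducer: keep the best score per location, then report the
    location with the most matches as the estimated position."""
    best = {}
    for location, score in mapped_pairs:
        best[location] = max(score, best.get(location, 0))
    return max(best, key=best.get)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    query = rng.normal(size=(50, 128)).astype(np.float32)
    db = [
        ("corridor_A", rng.normal(size=(80, 128)).astype(np.float32)),
        # corridor_B shares (slightly perturbed) descriptors with the query,
        # so it should win the vote in the reduce step.
        ("corridor_B", np.vstack([
            query[:30] + 0.01,
            rng.normal(size=(40, 128)).astype(np.float32),
        ])),
    ]
    pairs = [map_step(query, rec) for rec in db]
    print("Estimated location:", reduce_step(pairs))
```

The point of the decomposition is that each mapper scores the query against a single database image independently, so the expensive ASIFT matching can be spread across cluster nodes, while the reducer only aggregates per-location scores and selects the most likely position.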