An efficient parallel strategy for matching visual self-similarities in large image databases

  • Authors:
  • Katharina Schwarz, Tobias Häußler, Hendrik P. A. Lensch

  • Affiliations:
  • Computer Graphics, Tübingen University, Tübingen, Germany (all three authors)

  • Venue:
  • ECCV'12: Proceedings of the 12th European Conference on Computer Vision - Volume Part I
  • Year:
  • 2012

Abstract

Due to the high interest in social online systems, there exists a huge and still increasing amount of image data on the web. In order to handle this massive amount of visual information, algorithms often need to be redesigned. In this work, we develop an efficient approach to find visual similarities between images that runs completely on the GPU and is applicable to large image databases. Based on local self-similarity descriptors, the approach finds similarities even across modalities. Given a set of images, a database is created by storing all descriptors in an arrangement suitable for parallel GPU-based comparison. A novel voting scheme further considers the spatial layout of descriptors with hardly any overhead. Thousands of images can be searched in only a few seconds. We apply our algorithm to cluster a set of image responses in order to identify the various senses of ambiguous words, and to re-tag similar images with missing tags.
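
The paper itself provides no code; the following is a minimal CUDA sketch of the general idea the abstract describes: descriptors from all database images stored in one flat array, one GPU thread per database descriptor, and a vote cast for the owning image whenever a descriptor is similar enough to the query. The descriptor dimensionality (80, following the common local self-similarity layout), the squared-L2 distance, the threshold, and all identifiers are illustrative assumptions, not the authors' implementation; in particular, the paper's voting scheme also accounts for the spatial layout of descriptors, which is omitted here.

```cuda
// Hypothetical sketch of GPU-parallel self-similarity descriptor matching
// with per-image vote accumulation. Not the authors' implementation.
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>
#include <vector>

#define DESC_DIM  80      // assumed descriptor dimensionality
#define THRESHOLD 12.0f   // illustrative squared-L2 threshold, tuned only
                          // for the synthetic data generated below

__constant__ float d_query[DESC_DIM];  // one query descriptor in constant memory

// One thread per database descriptor: compute the squared L2 distance to the
// query and, if similar enough, cast one vote for the descriptor's image.
__global__ void matchDescriptors(const float* db, const int* imageId,
                                 int* votes, int numDesc)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numDesc) return;

    const float* d = db + i * DESC_DIM;
    float dist = 0.0f;
    for (int k = 0; k < DESC_DIM; ++k) {
        float diff = d[k] - d_query[k];
        dist += diff * diff;
    }
    if (dist < THRESHOLD)
        atomicAdd(&votes[imageId[i]], 1);
}

int main()
{
    const int numDesc = 1 << 18, numImages = 256;
    std::vector<float> h_db(numDesc * DESC_DIM);
    std::vector<int>   h_img(numDesc);
    float h_query[DESC_DIM];

    srand(42);  // synthetic random data stands in for real descriptors
    for (auto& v : h_db) v = rand() / (float)RAND_MAX;
    for (int i = 0; i < numDesc; ++i) h_img[i] = i % numImages;
    for (int k = 0; k < DESC_DIM; ++k) h_query[k] = rand() / (float)RAND_MAX;

    float* d_db; int *d_img, *d_votes;
    cudaMalloc(&d_db,    h_db.size()  * sizeof(float));
    cudaMalloc(&d_img,   h_img.size() * sizeof(int));
    cudaMalloc(&d_votes, numImages    * sizeof(int));
    cudaMemcpy(d_db,  h_db.data(),  h_db.size()  * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_img, h_img.data(), h_img.size() * sizeof(int),   cudaMemcpyHostToDevice);
    cudaMemset(d_votes, 0, numImages * sizeof(int));
    cudaMemcpyToSymbol(d_query, h_query, sizeof(h_query));

    matchDescriptors<<<(numDesc + 255) / 256, 256>>>(d_db, d_img, d_votes, numDesc);

    std::vector<int> h_votes(numImages);
    cudaMemcpy(h_votes.data(), d_votes, numImages * sizeof(int), cudaMemcpyDeviceToHost);

    int best = 0;  // image with the most votes is the best match
    for (int i = 1; i < numImages; ++i)
        if (h_votes[i] > h_votes[best]) best = i;
    printf("best match: image %d with %d votes\n", best, h_votes[best]);

    cudaFree(d_db); cudaFree(d_img); cudaFree(d_votes);
    return 0;
}
```

In the paper's setting, the database side would hold descriptors from thousands of images laid out contiguously for coalesced memory access, and the host would loop (or batch) this kernel over all descriptors of the query image; the image accumulating the most votes is then reported as the most similar one.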