See-to-retrieve: efficient processing of spatio-visual keyword queries

  • Authors:
  • Chao Zhang; Lidan Shou; Ke Chen; Gang Chen

  • Affiliations:
  • Zhejiang University, Hangzhou, China (all authors)

  • Venue:
  • SIGIR '12: Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval
  • Year:
  • 2012

Abstract

The wide proliferation of powerful smartphones equipped with multiple sensors, 3D graphics engines, and 3G connectivity has nurtured a new spectrum of visual mobile applications. These applications require novel data retrieval techniques that we call What-You-Retrieve-Is-What-You-See (WYRIWYS). However, state-of-the-art spatial retrieval methods are mostly distance-based and therefore cannot support WYRIWYS. Motivated by this problem, we propose a novel query type, the spatio-visual keyword (SVK) query, for retrieving spatial Web objects that are both visually conspicuous and semantically relevant to the user. To capture the visual features of spatial Web objects with extents, we introduce a novel visibility metric that computes object visibility in a cumulative manner. We propose an incremental method called Complete Occlusion-map based Retrieval (COR) to answer SVK queries. The method exploits effective heuristics to prune the search space and to construct a data structure called the Occlusion-Map; it then adopts a best-first strategy to return relevant objects incrementally. Extensive experiments on real and synthetic data sets suggest that our method processes SVK queries effectively and efficiently.
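
The abstract gives no implementation details, but the best-first retrieval it describes can be illustrated with a minimal sketch. The snippet below assumes a combined score that blends a precomputed cumulative visibility value with keyword relevance through a hypothetical weight alpha; the names SpatialObject, svk_score, and best_first_svk are illustrative placeholders, not the paper's COR algorithm, which additionally builds an Occlusion-Map to prune candidates before ranking.

```python
import heapq
from dataclasses import dataclass


@dataclass
class SpatialObject:
    oid: int
    visibility: float          # cumulative visibility score, assumed in [0, 1]
    keyword_relevance: float   # textual relevance to the query keywords


def svk_score(obj: SpatialObject, alpha: float = 0.5) -> float:
    """Blend visual conspicuousness and keyword relevance (assumed scoring form)."""
    return alpha * obj.visibility + (1.0 - alpha) * obj.keyword_relevance


def best_first_svk(objects, k, alpha=0.5):
    """Return the top-k objects in descending SVK score, one at a time."""
    # Max-heap via negated scores; oid breaks ties so objects are never compared.
    heap = [(-svk_score(o, alpha), o.oid, o) for o in objects]
    heapq.heapify(heap)
    results = []
    while heap and len(results) < k:
        _, _, obj = heapq.heappop(heap)
        results.append(obj)
    return results


if __name__ == "__main__":
    candidates = [
        SpatialObject(1, visibility=0.9, keyword_relevance=0.2),
        SpatialObject(2, visibility=0.4, keyword_relevance=0.8),
        SpatialObject(3, visibility=0.7, keyword_relevance=0.7),
    ]
    for obj in best_first_svk(candidates, k=2):
        print(obj.oid, round(svk_score(obj), 3))
```

In a full implementation the priority queue would be fed lazily from a spatial index rather than from a fully materialized candidate list, so that objects ruled out by the occlusion heuristics are never scored at all.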