Allocating images and selecting image collections for distributed visual search

  • Authors:
  • Bing Li; Ling-Yu Duan; Jie Lin; Tiejun Huang

  • Affiliations:
  • The Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; Peking University, Beijing, China; Beijing Jiaotong University, Beijing, China; Peking University, Beijing, China

  • Venue:
  • Proceedings of the 4th International Conference on Internet Multimedia Computing and Service
  • Year:
  • 2012

Abstract

To improve query throughput, distributed image retrieval has been widely used to address large-scale visual search. In textual retrieval, state-of-the-art approaches partition a textual database into multiple collections offline and allocate each collection to a server node. For each incoming query, only a few relevant collections are selected for search without seriously sacrificing retrieval accuracy, which enables server nodes to process multiple queries concurrently. Unlike text retrieval, distributed visual search poses challenges in optimally allocating images and selecting image collections, due to the lack of semantic meaning in the Bag-of-Words (BoW) based representation. In this paper, we propose a novel Semantics Related Distributed Visual Search (SRDVS) model. We employ Latent Dirichlet Allocation (LDA) [2] to discover latent concepts as an intermediate semantic representation over a large-scale image database. We aim to learn an optimal image allocation for each server node and to accurately perform collection selection for each query. Experimental results over a million-scale image database demonstrate encouraging performance over state-of-the-art approaches. On average, only 6% of the collections are selected, yet this yields retrieval performance comparable to exhaustive search over the whole database.
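The general idea described in the abstract can be illustrated with a minimal sketch, assuming scikit-learn's LatentDirichletAllocation and toy BoW data: topic mixtures stand in for the latent concepts, images are grouped into collections by dominant topic, and queries are routed to the few collections with the most similar topic profiles. The allocation and selection rules below are simplified illustrations, not the paper's learned procedures, and all names and parameters are hypothetical.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Toy stand-in for a BoW image database: n_images x vocab_size count matrix.
n_images, vocab_size, n_topics, n_collections = 1000, 500, 20, 10
bow = rng.poisson(0.3, size=(n_images, vocab_size))

# 1) Discover latent concepts (topics) over the database.
lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
theta = lda.fit_transform(bow)  # per-image topic mixtures

# 2) Allocate images to collections: here, a crude rule based on the
#    dominant topic, standing in for the paper's learned allocation.
assignment = theta.argmax(axis=1) % n_collections
collections = [np.where(assignment == c)[0] for c in range(n_collections)]

# Topic profile of each collection (mean topic mixture of its images).
profiles = np.stack([theta[idx].mean(axis=0) for idx in collections])

def select_collections(query_bow, top_k=2):
    """Rank collections by cosine similarity between the query's topic
    mixture and each collection's profile; search only the top_k."""
    q_theta = lda.transform(query_bow.reshape(1, -1))[0]
    sims = profiles @ q_theta / (
        np.linalg.norm(profiles, axis=1) * np.linalg.norm(q_theta) + 1e-12)
    return np.argsort(sims)[::-1][:top_k]

query = rng.poisson(0.3, size=vocab_size)
print("Search only collections:", select_collections(query))
```

In a deployment along these lines, only the selected collections' server nodes would run the actual visual matching for a query, which is what allows the remaining nodes to serve other queries concurrently.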