Image retrieval using query by contextual example

  • Authors: Nikhil Rasiwasia; Nuno Vasconcelos
  • Affiliations: University of California at San Diego, San Diego, USA (both authors)
  • Venue: MIR '08: Proceedings of the 1st ACM International Conference on Multimedia Information Retrieval
  • Year: 2008

Abstract

Current image retrieval techniques have difficulty retrieving images that exhibit distinct visual patterns but belong to the same class as the query image. Previous attempts to improve generalization have shown that the introduction of semantic representations can mitigate this problem. We extend the existing query-by-semantic-example (QBSE) retrieval paradigm by adding a second layer of semantic representation. At the first level, the representation is driven by patch-based visual features: semantic concepts, from a predefined vocabulary, are modeled as Gaussian mixtures on a visual feature space, and images are represented as vectors of posterior probabilities of containing each of the semantic concepts. At the second level, the representation is purely semantic: semantic concepts are modeled as Dirichlet mixtures on the semantic feature space of QBSE, and images are again represented as vectors of posterior concept probabilities. It is shown that the proposed retrieval strategy, referred to as query-by-contextual-example (QBCE), is able to cope with the ambiguities of patch-based classification, exhibiting significantly better generalization than previous methods. An experimental evaluation on benchmark datasets shows that QBCE retrieval systems can substantially outperform their query-by-visual-example (QBVE) and QBSE counterparts, achieving high precision at most levels of recall.
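The two-level representation described in the abstract can be sketched in code. The following is a minimal toy illustration, not the authors' implementation: the concept vocabulary ("sky", "water", "mountain"), the contextual classes ("coast", "alpine"), and all mixture parameters are invented for the example, each first-level concept is a one-component Gaussian mixture, and each second-level concept is a single Dirichlet rather than a Dirichlet mixture.

```python
import numpy as np
from scipy.stats import multivariate_normal, dirichlet

rng = np.random.default_rng(0)

def gmm_likelihood(x, means, covs, weights):
    """Likelihood of a visual feature x under a Gaussian mixture."""
    return sum(w * multivariate_normal.pdf(x, mean=m, cov=c)
               for m, c, w in zip(means, covs, weights))

# Level 1: each semantic concept as a Gaussian mixture on a toy
# 2-D visual feature space (parameters are illustrative, not learned).
concepts = {
    "sky":      ([np.array([0.0, 1.0])], [np.eye(2)], [1.0]),
    "water":    ([np.array([1.0, 0.0])], [np.eye(2)], [1.0]),
    "mountain": ([np.array([2.0, 2.0])], [np.eye(2)], [1.0]),
}

def semantic_vector(patches):
    """QBSE step: image patches -> vector of posterior concept
    probabilities, averaged over patches (uniform concept prior)."""
    names = list(concepts)
    post = np.zeros(len(names))
    for x in patches:
        lik = np.array([gmm_likelihood(x, *concepts[n]) for n in names])
        post += lik / lik.sum()
    return post / len(patches)

# Level 2: contextual concepts as Dirichlet distributions on the
# simplex of level-1 semantic vectors (single Dirichlet per concept
# here; the paper uses Dirichlet mixtures).
dirichlet_models = {
    "coast":  np.array([4.0, 4.0, 1.0]),  # mass on sky/water
    "alpine": np.array([1.0, 1.0, 6.0]),  # mass on mountain
}

def contextual_vector(s):
    """QBCE step: semantic vector -> posteriors over contextual
    concepts (uniform prior over contextual classes)."""
    s = np.asarray(s, dtype=float)
    s = s / s.sum()  # ensure the vector lies on the simplex
    names = list(dirichlet_models)
    lik = np.array([dirichlet.pdf(s, dirichlet_models[n]) for n in names])
    return dict(zip(names, lik / lik.sum()))

# An image of patches near the sky/water modes maps to a semantic
# vector dominated by sky/water, hence a "coast"-like context.
patches = rng.normal([0.5, 0.5], 0.3, size=(20, 2))
s = semantic_vector(patches)
print(contextual_vector(s))
```

At retrieval time, both QBSE and QBCE would rank database images by a similarity measure (e.g., Kullback-Leibler divergence) between these probability vectors; that ranking step is omitted above.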