Toward bridging the annotation-retrieval gap in image search by a generative modeling approach

  • Authors:
  • Ritendra Datta; Weina Ge; Jia Li; James Z. Wang

  • Affiliations:
  • Pennsylvania State University; Pennsylvania State University; Pennsylvania State University; Pennsylvania State University

  • Venue:
  • MULTIMEDIA '06 Proceedings of the 14th annual ACM international conference on Multimedia
  • Year:
  • 2006

Abstract

While automatic image annotation remains an actively pursued research topic, its use for enhancing image search has not been extensively explored. We propose an annotation-driven image retrieval approach and argue that, under a number of different scenarios, it is very effective for semantically meaningful image search. In particular, our system is demonstrated to handle partially tagged and completely untagged image databases, multiple-keyword queries, and example-based queries with or without tags, all in near real time. Because our approach utilizes extra knowledge from a training dataset, it outperforms state-of-the-art retrieval techniques based on visual similarity alone. For this purpose, a novel structure-composition model constructed from Beta distributions is developed to capture the spatial relationships among segmented regions of images. This model, combined with a Gaussian mixture model, produces scalable categorization of generic images; the categorization results surpass previously reported results in both speed and accuracy. Our annotation framework uses the categorization results to select tags based on term frequency, term saliency, and a WordNet-based measure of congruity, boosting salient tags while penalizing potentially unrelated ones. A bag-of-words distance measure based on WordNet is used to compute semantic similarity. The effectiveness of our approach is shown through extensive experiments.
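
The abstract does not detail how the WordNet-based bag-of-words distance is computed. As a rough illustration of that kind of measure, the sketch below matches each query tag to its best WordNet path-similarity counterpart among an image's tags and averages the scores. The function names, the greedy best-match averaging scheme, and the noun-only restriction are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of a WordNet-based bag-of-words similarity (illustrative only,
# not the paper's method). Each query tag is matched against the image's tags
# via WordNet path similarity, and the best-match scores are averaged.
# Requires: pip install nltk; then nltk.download('wordnet') once.
from nltk.corpus import wordnet as wn


def tag_similarity(tag_a: str, tag_b: str) -> float:
    """Best path similarity between any noun senses of the two tags."""
    synsets_a = wn.synsets(tag_a, pos=wn.NOUN)
    synsets_b = wn.synsets(tag_b, pos=wn.NOUN)
    scores = [s1.path_similarity(s2) or 0.0
              for s1 in synsets_a for s2 in synsets_b]
    return max(scores, default=0.0)


def bag_of_words_similarity(query_tags: list[str], image_tags: list[str]) -> float:
    """Average, over query tags, of the best match found among the image tags."""
    if not query_tags or not image_tags:
        return 0.0
    best = [max(tag_similarity(q, t) for t in image_tags) for q in query_tags]
    return sum(best) / len(best)


if __name__ == "__main__":
    # Rank two hypothetical annotated images against a two-keyword query.
    query = ["tiger", "grass"]
    print(bag_of_words_similarity(query, ["cat", "meadow", "tree"]))
    print(bag_of_words_similarity(query, ["car", "road"]))
```

Under this kind of measure, images whose tags are semantically close to the query keywords score higher even without exact keyword matches, which is consistent with the semantic ranking the abstract describes.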