Retrieving lightly annotated images using image similarities

  • Authors:
  • Masashi Inoue; Naonori Ueda

  • Affiliations:
  • National Institute of Informatics, Chiyoda-ku, Tokyo, Japan; NTT Communication Science Laboratories, Seika-cho, Soraku-gun, Kyoto, Japan

  • Venue:
  • Proceedings of the 2005 ACM symposium on Applied computing
  • Year:
  • 2005

Abstract

Users' search needs are often represented by words, and images are retrieved according to such textual queries. Annotation words assigned to the stored images are the most useful link between queries and images. However, because of annotation cost, only a limited number of annotation words are available in many cases. When no annotations are given at all, techniques are needed that assign annotations automatically. When only a few annotation words are given to each image (lightly annotated), enhancement techniques are needed that make the best use of the available annotations. We address the latter problem by estimating word associations to fill the lexical gap between queries and annotations. The model of word associations can be learned from the data. However, since the images are only lightly annotated, data sparseness becomes a crucial issue when computing word associations. To compensate for this sparseness, we propose a novel data exploration technique in which image similarities contribute to the estimation of word associations, on the assumption that similar images have similar semantic concepts. We experimentally show the potential benefit of our approach.
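The core idea — letting visual similarity between images back up sparse annotations when estimating word associations — can be illustrated with a minimal sketch. The toy feature vectors, annotation lists, and the `word_association` scoring function below are all hypothetical illustrations, not the paper's actual model: two words are associated through pairs of images whose annotations contain them, with each pair weighted by the cosine similarity of the image features.

```python
import numpy as np

# Hypothetical toy data: image feature vectors and their (light) annotations.
features = np.array([
    [0.9, 0.1],   # image 0
    [0.8, 0.2],   # image 1 (visually similar to image 0)
    [0.1, 0.9],   # image 2 (visually dissimilar)
])
annotations = [["beach"], ["sea"], ["forest"]]  # one word each: lightly annotated


def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def word_association(word_a, word_b):
    """Associate two words through pairs of images whose annotations
    contain them, weighting each image pair by visual similarity."""
    score = 0.0
    for i, words_i in enumerate(annotations):
        for j, words_j in enumerate(annotations):
            if word_a in words_i and word_b in words_j:
                score += cosine(features[i], features[j])
    return score


# "beach" and "sea" never co-occur on a single image, so a purely
# annotation-based association would be zero; here the visual similarity
# of the images carrying them still yields a high association score.
print(word_association("beach", "sea"))     # high: similar images
print(word_association("beach", "forest"))  # low: dissimilar images
```

The point of the sketch is only that visual similarity can bridge words that never share an image, which is exactly the sparseness problem the abstract describes.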