A mutual semantic endorsement approach to image retrieval and context provision

  • Authors: Jia Li
  • Affiliations: Pennsylvania State University, University Park, PA
  • Venue: Proceedings of the 7th ACM SIGMM international workshop on Multimedia information retrieval
  • Year: 2005

Abstract

Learning semantics from annotated images to enhance content-based retrieval is an important research direction. In this paper, annotation data are assumed to be available for only a subset of images in the database. An on-the-fly learning method is developed to capture the semantics of query images. Specifically, the semantics of annotated images in the visual proximity of a query are compared with each other to determine the amount of mutual endorsement. An image is considered endorsed by another if they possess similar semantics. Annotations with high mutual endorsement are used to narrow down a candidate pool of images. The new retrieval method is inherently dynamic and seamlessly handles different forms of annotation data. Experiments show that semantic endorsement can increase precision by as much as 70% on average for a wide range of parameter settings. We also develop a context provision mechanism to reveal the relationship between a query and semantic clusters extracted from the database. Context helps users explore the content of a database and provides a platform for them to tailor searches by stressing different perspectives in the interpretation of a query.
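
The abstract only outlines the endorsement step, so the sketch below illustrates one way such mutual endorsement could be computed. It assumes the visual neighbors of a query have already been retrieved and that each annotation is a set of keywords; the Jaccard overlap, the `top_k` cutoff, and all identifiers are illustrative placeholders, not the paper's actual formulation.

```python
from itertools import combinations

def mutual_endorsement_scores(neighbors):
    """Score each annotated neighbor of a query by how strongly its
    annotation is endorsed by the other neighbors' annotations.

    `neighbors` maps an image id to a set of annotation keywords.
    Keyword-set overlap (Jaccard) stands in here for the semantic
    similarity measure; it is purely an illustrative assumption.
    """
    scores = {img: 0.0 for img in neighbors}
    for (a, kw_a), (b, kw_b) in combinations(neighbors.items(), 2):
        union = kw_a | kw_b
        overlap = len(kw_a & kw_b) / len(union) if union else 0.0
        # Two neighbors with similar semantics endorse each other.
        scores[a] += overlap
        scores[b] += overlap
    return scores

def endorsed_keywords(neighbors, top_k=3):
    """Keep keywords from the most strongly endorsed neighbors; these
    would then be used to narrow the candidate pool for the query."""
    scores = mutual_endorsement_scores(neighbors)
    ranked = sorted(scores, key=scores.get, reverse=True)[:top_k]
    keywords = set()
    for img in ranked:
        keywords |= neighbors[img]
    return keywords

if __name__ == "__main__":
    # Hypothetical annotated images found in the visual proximity of a query.
    neighbors = {
        "img_1": {"beach", "sea", "sand"},
        "img_2": {"sea", "coast", "sand"},
        "img_3": {"car", "street"},  # weakly endorsed outlier
    }
    print(endorsed_keywords(neighbors, top_k=2))
```

In this toy example the two coastal images endorse each other strongly while the street scene receives little endorsement, so only the coastal keywords survive to constrain the candidate pool, mirroring the narrowing-down step described in the abstract.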