A visual annotation framework using common-sensical and linguistic relationships for semantic media retrieval

  • Authors:
  • Bageshree Shevade; Hari Sundaram

  • Affiliations:
  • Arizona State University; Arizona State University

  • Venue:
  • AMR'05: Proceedings of the Third International Conference on Adaptive Multimedia Retrieval: User, Context, and Feedback
  • Year:
  • 2005

Abstract

In this paper, we present a novel image annotation approach with an emphasis on (a) common-sense based semantic propagation, (b) visual annotation interfaces, and (c) novel evaluation schemes. The annotation system is interactive, intuitive, and operates in real time. We propagate the semantics of annotations using WordNet and ConceptNet together with low-level features extracted from the images. We introduce novel semantic dissimilarity measures and propagation frameworks. We develop a novel visual annotation interface that allows a user to group images by creating visual concepts through direct-manipulation metaphors, without manual annotation. We also develop a new evaluation technique for annotation that scores results using commonsensical relationships between concepts. Our experimental results on three different datasets indicate that the annotation system performs very well. The semantic propagation results are good: we converge close to the semantics of the image after annotating only a small fraction (~16.8%) of the database images.
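
The abstract describes propagation driven by WordNet/ConceptNet relationships combined with low-level visual features. As a rough illustration of that idea only, and not the authors' actual measures or algorithm, the sketch below suggests a label for an unannotated image by mixing visual nearest neighbors with a WordNet path-based concept dissimilarity. The functions `concept_dissimilarity` and `suggest_label`, the Euclidean feature distance, and the parameter `k` are assumptions introduced here, and the ConceptNet component of the paper is omitted.

```python
# A minimal sketch, not the paper's implementation: combine a WordNet-based
# concept dissimilarity with visual nearest neighbors to suggest an annotation
# for an unlabeled image. All parameters and the consensus rule are assumptions.
from itertools import product
import numpy as np
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def concept_dissimilarity(word_a: str, word_b: str) -> float:
    """1 minus the best WordNet path similarity over noun senses (1.0 = unrelated)."""
    sims = [a.path_similarity(b)
            for a, b in product(wn.synsets(word_a, pos=wn.NOUN),
                                wn.synsets(word_b, pos=wn.NOUN))]
    sims = [s for s in sims if s is not None]
    return 1.0 - max(sims) if sims else 1.0

def suggest_label(unlabeled_feat, annotated, k=3):
    """Suggest an annotation for an image given its low-level feature vector.

    `annotated` is a list of (label, feature_vector) pairs for images the user
    has already annotated. Take the k visually closest annotated images, then
    return the candidate label that is, on average, semantically closest to
    the other candidates.
    """
    # Visual step: rank annotated images by Euclidean distance in feature space.
    by_distance = sorted(annotated,
                         key=lambda lf: np.linalg.norm(lf[1] - unlabeled_feat))
    candidates = [label for label, _ in by_distance[:k]]

    # Semantic step: prefer the candidate most consistent with its peers.
    def avg_dissim(label):
        others = [c for c in candidates if c != label]
        if not others:
            return 0.0
        return float(np.mean([concept_dissimilarity(label, o) for o in others]))

    return min(candidates, key=avg_dissim)
```

For example, with three annotated images labeled "beach", "ocean", and "car" whose feature vectors are all close to the query image, the semantic step would favor "beach" or "ocean" over "car", since those concepts are mutually closer in WordNet.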