Vidya: an experiential annotation system

  • Authors:
  • Bageshree Shevade; Hari Sundaram

  • Affiliations:
  • Arizona State University, AZ; Arizona State University, AZ

  • Venue:
  • ETP '03 Proceedings of the 2003 ACM SIGMM workshop on Experiential telepresence
  • Year:
  • 2003

Abstract

In this paper, we present a novel annotation paradigm with an emphasis on two facets -- (a) the end-user experience and (b) semantic propagation. The annotation problem is important since media semantics play a key role in new multimedia applications; however, there is currently very little incentive for end users to annotate.

The annotation system is interactive and experiential. We attempt to propagate the semantics of annotations by using WordNet, a lexicographic arrangement of words, together with low-level features extracted from the images. We introduce novel semantic dissimilarity measures and propagation frameworks. The system provides insight to the user by presenting her with knowledge sources that are constrained by the user and media context. The knowledge sources are presented using context-aware hyper-mediation.

Our experimental results indicate that the system performs well. We tested the new annotation experience in a pilot user study; the users agreed that the new framework was more useful than a traditional annotation interface. The semantic propagation results are good as well -- we converge close to the semantics of the image by annotating only a small fraction (~15%) of the database images.
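To make the WordNet-based dissimilarity idea concrete, here is a minimal sketch. The paper does not specify its actual measures, so this is only an assumption-laden illustration: a toy hand-coded hypernym (IS-A) hierarchy stands in for WordNet, dissimilarity is the edge-count path length through the lowest common ancestor, and the blending weight `alpha` combining it with a low-level feature distance is hypothetical.

```python
# Illustrative sketch only -- NOT the paper's actual measure.
# A tiny hand-coded hypernym hierarchy stands in for WordNet;
# all words and the weight `alpha` are hypothetical.

# child -> parent links in a toy IS-A hierarchy
HYPERNYMS = {
    "dog": "canine", "wolf": "canine", "canine": "mammal",
    "cat": "feline", "feline": "mammal", "mammal": "animal",
    "oak": "tree", "tree": "plant", "plant": "organism",
    "animal": "organism",
}

def ancestors(word):
    """Return the chain from `word` up to the root, word included."""
    chain = [word]
    while word in HYPERNYMS:
        word = HYPERNYMS[word]
        chain.append(word)
    return chain

def semantic_dissimilarity(w1, w2):
    """Edge count through the lowest common ancestor (LCA);
    infinite if the two words share no ancestor."""
    a1, a2 = ancestors(w1), ancestors(w2)
    common = set(a1) & set(a2)
    if not common:
        return float("inf")
    # LCA = shared ancestor minimizing total steps from both words
    lca = min(common, key=lambda a: a1.index(a) + a2.index(a))
    return a1.index(lca) + a2.index(lca)

def combined_dissimilarity(w1, w2, feat_dist, alpha=0.5):
    """Blend semantic and low-level feature distances (hypothetical)."""
    return alpha * semantic_dissimilarity(w1, w2) + (1 - alpha) * feat_dist
```

For example, `semantic_dissimilarity("dog", "wolf")` is 2 (both are one step from "canine"), while `semantic_dissimilarity("dog", "oak")` is 7, so an annotation on a dog image would propagate to a wolf image far more readily than to a tree image -- the kind of constrained propagation the abstract describes.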