Improving Image Annotations Using WordNet

  • Authors:
  • Yohan Jin, Lei Wang, Latifur Khan

  • Affiliations:
  • Department of Computer Science, University of Texas at Dallas, Richardson, Texas (all authors)

  • Venue:
  • MIS'05 Proceedings of the 11th international conference on Advances in Multimedia Information Systems
  • Year:
  • 2005


Abstract

The development of technology generates huge amounts of non-textual information, such as images, so an efficient image annotation and retrieval system is highly desirable. Clustering algorithms make it possible to represent the visual features of images with a finite set of symbols. Building on this, many statistical models have been published that analyze the correspondence between visual features and words and discover hidden semantics; these models improve annotation and retrieval over large image databases. However, the current state of the art, including our previous work, produces too many irrelevant keywords during annotation. In this paper, we propose a novel approach that augments the classical model with a generic knowledge base, WordNet, and uses it to prune irrelevant keywords. To identify irrelevant keywords, we investigate various semantic similarity measures between keywords and fuse the outcomes of these measures to make a final decision. We have implemented various models that link visual tokens with keywords based on the WordNet knowledge base, and evaluated performance in terms of precision and recall on a benchmark dataset. The results show that augmenting the classical model with the knowledge base improves annotation accuracy by removing irrelevant keywords.
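The pruning idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: the fusion rule (plain averaging), the threshold, and the toy similarity table are all assumptions made here for clarity. A real system would derive pairwise similarities from WordNet (e.g. path-based or Wu-Palmer measures, available in Python via NLTK's `wordnet` corpus).

```python
def fuse(scores):
    # Fuse the outcomes of several similarity measures.
    # Plain averaging is a stand-in for the paper's fusion rule.
    return sum(scores) / len(scores)

def prune_keywords(keywords, measures, threshold=0.3):
    """Keep a candidate keyword only if its fused semantic similarity
    to the other candidates reaches the threshold; otherwise treat it
    as irrelevant and prune it."""
    kept = []
    for word in keywords:
        others = [o for o in keywords if o != word]
        if not others:
            kept.append(word)  # a lone candidate has nothing to compare against
            continue
        # Fuse all measures for each (word, other) pair, then average
        # over the other candidates to score this keyword's relevance.
        sims = [fuse([m(word, o) for m in measures]) for o in others]
        if sum(sims) / len(sims) >= threshold:
            kept.append(word)
    return kept

# Hypothetical pairwise similarities; a real system would compute these
# from WordNet instead of a hand-written table.
TOY_SIM = {
    ("tiger", "grass"): 0.5, ("tiger", "tree"): 0.4,
    ("grass", "tree"): 0.7, ("tiger", "clock"): 0.1,
    ("grass", "clock"): 0.1, ("tree", "clock"): 0.1,
}

def toy_measure(a, b):
    # Symmetric lookup with 0.0 for unknown pairs.
    return TOY_SIM.get((a, b), TOY_SIM.get((b, a), 0.0))

print(prune_keywords(["tiger", "grass", "tree", "clock"], [toy_measure]))
# "clock" is pruned: its fused similarity to the scene words is low
```

The design choice here is that a keyword's relevance is judged against the other candidate annotations for the same image, so a semantically isolated word (like "clock" in a wildlife scene) falls below the threshold and is dropped.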