Going beyond the surrounding text to semantically annotate and search digital images

  • Authors:
  • Shahrul Azman Noah;Datul Aida Ali;Arifah Che Alhadi;Junaidah Mohamad Kassim

  • Affiliations:
  • Faculty of Information Science & Technology, Universiti Kebangsaan Malaysia, Bangi, Selangor, Malaysia (Noah, Ali, Kassim); Department of Computer Science, Universiti Malaysia Terengganu, Kuala Terengganu, Terengganu, Malaysia (Alhadi)

  • Venue:
  • ACIIDS'10 Proceedings of the Second international conference on Intelligent information and database systems: Part I
  • Year:
  • 2010


Abstract

Digital objects such as images and videos are fundamental resources in digital libraries. To assist in retrieving such objects, they are usually tagged with keywords or sentences. The most popular approach tags digital objects based on their associated text. However, relying on associated text alone, such as the surrounding text, cannot semantically describe such objects. This paper discusses the use of WordNet and ConceptNet to tag digital images with terms beyond those available in the surrounding text. WordNet is used to disambiguate concepts or terms from the associated text, while ConceptNet is used to infer topics or common-sense knowledge by summarizing the text that describes the images. Relying on WordNet alone, however, is not sufficient, particularly when disambiguating specific or domain-dependent concepts. Therefore, a Named Entity Recognition (NER) technique is required to annotate important entities such as names of persons, locations and organizations. Our work focuses on online news images, which are richly described with textual descriptions.
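The pipeline the abstract outlines (gloss-based sense disambiguation over surrounding text, plus entity annotation) can be illustrated with a minimal sketch. This is not the authors' implementation: the simplified Lesk-style disambiguator, the toy sense inventory, the gazetteer-based NER pass, and the example caption are all invented for demonstration; real systems would query WordNet glosses and a trained NER model instead.

```python
# Illustrative sketch only: a simplified Lesk-style word-sense
# disambiguator plus a tiny gazetteer NER pass. The sense glosses and
# entity lists below are hypothetical stand-ins for WordNet and a
# trained NER component.

TOY_SENSES = {
    "bank": {
        "bank.finance": "financial institution that accepts deposits and money",
        "bank.river": "sloping land beside a river or body of water",
    },
}

GAZETTEER = {
    "kuala lumpur": "LOCATION",
    "malaysia": "LOCATION",
    "reuters": "ORGANIZATION",
}

def lesk(word, context_tokens):
    """Pick the sense whose gloss overlaps most with the context tokens."""
    senses = TOY_SENSES.get(word)
    if not senses:
        return None
    context = {t.lower() for t in context_tokens}
    best, best_overlap = None, -1
    for sense, gloss in senses.items():
        overlap = len(context & set(gloss.split()))
        if overlap > best_overlap:
            best, best_overlap = sense, overlap
    return best

def tag_entities(text):
    """Annotate entities that appear in the gazetteer."""
    lowered = text.lower()
    return [(name, label) for name, label in GAZETTEER.items()
            if name in lowered]

caption = "Floods hit the river bank near Kuala Lumpur, Malaysia"
tokens = caption.lower().split()
print(lesk("bank", tokens))     # -> bank.river (context word "river" matches that gloss)
print(tag_entities(caption))    # -> [('kuala lumpur', 'LOCATION'), ('malaysia', 'LOCATION')]
```

The gloss-overlap step stands in for WordNet-based disambiguation of ambiguous surrounding-text terms, and the gazetteer lookup stands in for the NER step that handles domain-dependent names WordNet cannot resolve.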