2-DOM: A 2-Dimensional Object Model towards Web Image Annotation

  • Authors:
  • Sadet Alcic; Stefan Conrad

  • Venue:
  • SMAP '08 Proceedings of the 2008 Third International Workshop on Semantic Media Adaptation and Personalization
  • Year:
  • 2008

Abstract

The automatic annotation of images remains an unreliable process due to the well-known semantic gap between the physical representation of images and their high-level semantics. To avoid a direct confrontation with the semantic gap, several approaches restrict the image dataset to web images. Web images mostly appear on websites together with textual content that can deliver important information about the image semantics. Popular image search engines use the text surrounding an image to generate annotation keywords, and emphasized text elements such as headlines are likewise assumed to be important description providers. Nevertheless, we observe false positives at high ranking positions of these search engines, which are the effect of incorrect text-to-image mappings. This paper addresses the problem of finding correct matches between text elements and images in HTML documents by extending the DOM tree of a webpage to a 2-Dimensional Object Model (2-DOM) tree. This model adapts to the two-dimensional nature of web documents and thus allows a better mapping of text articles to images. The evaluation results show that text articles are assigned to web images with a precision of over 90 percent. This intuitively leads to better textual information about the events depicted in the images and thus to better image retrieval quality in query-by-keyword scenarios.
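The 2-DOM construction itself is not detailed in this abstract, but the baseline DOM-based text-to-image mapping it improves upon can be sketched. The following is a minimal, hypothetical illustration using only Python's standard library: it associates each `<img>` with the text collected inside its nearest enclosing block element. The `ImageTextMapper` class, the chosen block-tag set, and the propagation of text to parent blocks are assumptions for the sketch, not the paper's method, and the rendered 2-D layout that 2-DOM exploits is not modeled here.

```python
from html.parser import HTMLParser


class ImageTextMapper(HTMLParser):
    """Naive DOM-based mapping: pair each <img> with the text of its
    nearest enclosing block element (a baseline sketch only)."""

    BLOCKS = {"div", "p", "td", "li", "article", "section"}

    def __init__(self):
        super().__init__()
        self.stack = []     # one frame per open block element
        self.mappings = []  # (img_src, surrounding_text) pairs

    def handle_starttag(self, tag, attrs):
        if tag in self.BLOCKS:
            self.stack.append({"text": [], "imgs": []})
        elif tag == "img" and self.stack:
            # record the image in the innermost open block
            self.stack[-1]["imgs"].append(dict(attrs).get("src", ""))

    def handle_data(self, data):
        if self.stack and data.strip():
            self.stack[-1]["text"].append(data.strip())

    def handle_endtag(self, tag):
        if tag in self.BLOCKS and self.stack:
            block = self.stack.pop()
            text = " ".join(block["text"])
            for src in block["imgs"]:
                self.mappings.append((src, text))
            # propagate text upward so an image in an outer block
            # still sees text from nested children
            if self.stack and text:
                self.stack[-1]["text"].append(text)


mapper = ImageTextMapper()
mapper.feed('<div><img src="a.jpg"><p>Team wins final.</p></div>'
            '<div><p>Unrelated story.</p></div>')
print(mapper.mappings)
```

Such a purely tree-based heuristic illustrates why false positives arise: DOM proximity does not guarantee visual proximity on the rendered page, which is the gap the 2-DOM model is designed to close.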