Leveraging auxiliary text terms for automatic image annotation

  • Authors:
  • Ning Zhou; Yi Shen; Jinye Peng; Xiaoyi Feng; Jianping Fan

  • Affiliations:
  • University of North Carolina at Charlotte, Charlotte, NC, USA (Ning Zhou, Yi Shen, Jianping Fan); Northwestern Polytechnical University, Xi'an, China (Jinye Peng, Xiaoyi Feng)

  • Venue:
  • Proceedings of the 20th International Conference Companion on World Wide Web
  • Year:
  • 2011


Abstract

This paper proposes a novel algorithm to annotate web images by automatically aligning the images with their most relevant auxiliary text terms. First, DOM-based web page segmentation is performed to extract images and their most relevant auxiliary text blocks. Second, automatic image clustering partitions the web images into groups according to their visual similarity contexts, which significantly reduces the uncertainty about the relatedness between the images and their auxiliary terms. The semantics of the visually similar images in the same cluster are then described by the same ranked list of terms that frequently co-occur in their text blocks. Finally, a relevance re-ranking process is performed over a term correlation network to further refine the ranked term list. Experiments on a large-scale collection of web pages yield very positive results.
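The last two stages of the pipeline can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: `rank_cluster_terms` scores each term by the fraction of a cluster's text blocks it appears in (a stand-in for the paper's co-occurrence ranking), and `rerank` performs a single propagation step over a toy term correlation network; the scoring function, the damping factor `alpha`, and the example data are all hypothetical.

```python
from collections import Counter

def rank_cluster_terms(text_blocks):
    """Score each term by the fraction of the cluster's auxiliary
    text blocks that contain it (hypothetical co-occurrence score)."""
    counts = Counter()
    for block in text_blocks:
        counts.update(set(block.lower().split()))
    total = len(text_blocks)
    return {term: c / total for term, c in counts.items()}

def rerank(scores, correlation, alpha=0.8):
    """One propagation step over a term correlation network:
    each term keeps a fraction alpha of its own score and absorbs
    the correlation-weighted scores of its neighbors."""
    reranked = {}
    for term, score in scores.items():
        neighbors = correlation.get(term, {})
        mass = sum(w * scores.get(other, 0.0)
                   for other, w in neighbors.items())
        reranked[term] = alpha * score + (1 - alpha) * mass
    return reranked

# Toy cluster: auxiliary text blocks of visually similar images.
blocks = [
    "golden gate bridge at sunset",
    "the golden gate bridge in fog",
    "bridge over san francisco bay",
]
scores = rank_cluster_terms(blocks)

# Toy correlation network linking two frequently co-occurring terms.
corr = {"bridge": {"gate": 0.9}, "gate": {"bridge": 0.9}}
final = rerank(scores, corr)
top = sorted(final, key=final.get, reverse=True)
print(top[0])  # the term shared by every block ranks first
```

In this toy example "bridge" appears in all three blocks and is further reinforced by its correlated neighbor "gate", so it stays at the top of the refined list; the same mechanism lets the correlation network promote terms that are strongly related to many high-scoring terms.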