Adaptive Model for Integrating Different Types of Associated Texts for Automated Annotation of Web Images

  • Authors:
  • Hongtao Xu;Xiangdong Zhou;Lan Lin;Mei Wang;Tat-Seng Chua

  • Affiliations:
  • Hongtao Xu: School of Computer Science, Fudan University, Shanghai, China
  • Xiangdong Zhou: School of Computer Science, Fudan University, Shanghai, China and National University of Singapore, Singapore
  • Lan Lin: Tongji University, Shanghai, China
  • Mei Wang: National University of Singapore, Singapore
  • Tat-Seng Chua: National University of Singapore, Singapore

  • Venue:
  • MMM '09 Proceedings of the 15th International Multimedia Modeling Conference on Advances in Multimedia Modeling
  • Year:
  • 2009

Abstract

Many kinds of text are associated with Web images, such as the image file name, ALT text, and surrounding text on the corresponding Web pages. It is well known that the semantics of Web images correlate strongly with these associated texts, which can therefore be used to infer the images' semantics. However, different types of associated text may play different roles in deriving the semantics of Web content. Most previous work either treats the associated texts as a whole or assigns fixed weights to the different types according to prior knowledge or heuristics. In this paper, we propose a novel linear basis expansion-based approach to automatically annotate Web images based on their associated texts. In particular, we adaptively model the semantic contributions of the different types of associated text using a piecewise-penalty weighted regression model. We also demonstrate that social tagging data for Web images, such as Flickr's Related Tags, can be leveraged to enhance Web image annotation. Experiments conducted on a real Web image data set demonstrate that our approach significantly improves the performance of Web image annotation.
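The core idea of the abstract — learning a separate contribution weight for each associated-text type under a per-type ("piecewise") penalty — can be illustrated with a minimal regression sketch. This is a hypothetical reconstruction, not the authors' actual model: the feature values, penalty strengths, and closed-form ridge solution are all illustrative assumptions.

```python
import numpy as np

# Hypothetical setup: for each image, one relevance score per
# associated-text type (file name, ALT text, surrounding text)
# for some candidate annotation word. All numbers are simulated.
rng = np.random.default_rng(0)
n_images = 200
X = rng.random((n_images, 3))          # columns: [file name, ALT, surrounding]
true_w = np.array([0.2, 0.7, 0.4])     # hidden per-type contributions
y = X @ true_w + 0.05 * rng.standard_normal(n_images)

# "Piecewise" penalty: a distinct ridge strength per text type,
# instead of one global regularizer or hand-fixed weights.
penalties = np.array([1.0, 0.1, 0.5])

# Closed-form penalized least squares: w = (X^T X + diag(lambda))^-1 X^T y
w = np.linalg.solve(X.T @ X + np.diag(penalties), X.T @ y)

# Annotation score for a new image: its per-type evidence combined
# with the learned (adaptive) weights.
x_new = np.array([0.1, 0.9, 0.3])
score = float(x_new @ w)
print(np.round(w, 2), round(score, 2))
```

In this toy run the learned weights recover the ordering of the simulated contributions (ALT text highest), which is the point of adaptively estimating per-type weights rather than fixing them by heuristics.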