Deriving semantic terms for images by mining the web

  • Authors:
  • Zhiguo Gong, Qian Liu, Jingzhi Guo

  • Affiliation:
  • University of Macau, Macao, P.R. China

  • Venue:
  • Proceedings of the 11th International Conference on Electronic Commerce
  • Year:
  • 2009

Abstract

In this paper, we propose a novel image annotation model based on mining the Web. In our approach, the concepts or words appearing in the text associated with a Web image are extracted and filtered to serve as semantic annotations for that image. To alleviate the influence of noisy images, for each semantic concept we refine the Web image-word relationships using a Gaussian mixture model; in this way, the words relevant to an image are re-weighted according to their relevance to the image in terms of both text and visual features. Furthermore, the words associated with an image are not semantically independent, and we use the co-occurrence of two words to describe their semantic relevance. We therefore apply a method, called Word Promotion, to co-enhance the weights of all the words associated with a given image based on their co-occurrences. Our experiments, conducted under several settings, show that our annotation method achieves satisfactory performance.
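
The abstract describes two refinement steps: re-weighting a candidate word by how well an image's visual features fit a Gaussian mixture learned for that concept, and then boosting each word's weight using its co-occurrence with the image's other words ("Word Promotion"). The sketch below is only an illustrative reading of those two steps, not the authors' implementation; the function names, the use of scikit-learn's GaussianMixture, and the parameters (n_components, alpha) are assumptions.

  # Hypothetical sketch of the two re-weighting steps described in the abstract.
  # Assumptions: scikit-learn's GaussianMixture stands in for the paper's
  # "Mixture Gaussian Distribution Model"; promote_words, alpha, and the
  # co-occurrence dictionary layout are illustrative choices.
  import numpy as np
  from sklearn.mixture import GaussianMixture

  def visual_weight(concept_features, image_feature, n_components=3):
      """Fit a Gaussian mixture to the visual features of Web images that carry
      a concept, then score a new image by its likelihood under that mixture."""
      gmm = GaussianMixture(n_components=n_components).fit(concept_features)
      # score_samples returns a log-likelihood; exponentiate for a relative weight
      return float(np.exp(gmm.score_samples(image_feature.reshape(1, -1))[0]))

  def promote_words(weights, cooccurrence, alpha=0.5):
      """'Word Promotion' reading: boost each word's weight by the weights of the
      other words associated with the image, scaled by co-occurrence strength."""
      promoted = {}
      for w, base in weights.items():
          boost = sum(cooccurrence.get((w, v), 0.0) * wv
                      for v, wv in weights.items() if v != w)
          promoted[w] = base + alpha * boost
      return promoted

  # Example: "sand" is promoted because it strongly co-occurs with "beach".
  weights = {"beach": 0.8, "sand": 0.3, "sky": 0.4}
  cooc = {("sand", "beach"): 0.6, ("beach", "sand"): 0.6}
  print(promote_words(weights, cooc))

In this reading, the visual weight filters out noise images whose features are unlikely under the concept's mixture, while Word Promotion lets semantically related words reinforce one another instead of being scored independently.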