Automatic image annotation by an iterative approach: incorporating keyword correlations and region matching

  • Authors:
  • Xiangdong Zhou;Mei Wang;Qi Zhang;Junqi Zhang;Baile Shi

  • Affiliations:
  • Fudan University, Shanghai, China;Fudan University, Shanghai, China;University of North Carolina at Chapel Hill;Fudan University, Shanghai, China;Fudan University, Shanghai, China

  • Venue:
  • Proceedings of the 6th ACM international conference on Image and video retrieval
  • Year:
  • 2007

Abstract

Automatic image annotation labels image content with semantic keywords. For instance, the Relevance Model estimates the joint probability of a keyword and the image [3]. Most previous annotation methods assign keywords independently of one another. Recently, the correlation between annotated keywords has been exploited to improve image annotation. However, directly estimating the joint probability of a set of keywords and an unlabeled image is computationally prohibitive. To avoid this computational difficulty, we propose a heuristic greedy iterative algorithm that estimates the probability of a keyword subset being the caption of an image. In our approach, the correlations between keywords are analyzed with "Automatic Local Analysis" from text information retrieval. In addition, we propose a new method for estimating the image generation probability based on region matching. We demonstrate that our iterative annotation algorithm can readily incorporate the keyword correlation and region matching approaches to improve image annotation significantly. Experiments on the ECCV 2002 benchmark [2] show that our method outperforms the state-of-the-art continuous-feature model MBRM, improving recall and precision by 21% and 11%, respectively.
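
To make the greedy iterative idea concrete, below is a minimal illustrative sketch, not the paper's actual algorithm. It assumes hypothetical inputs: per-keyword image generation scores (e.g., obtained from region matching against training images) and a keyword correlation table (e.g., derived from caption co-occurrence, in the spirit of Automatic Local Analysis). At each iteration it adds the keyword that best balances image support against agreement with the keywords already selected.

```python
def greedy_annotate(generation_scores, correlation, num_keywords=5, alpha=0.5):
    """Greedily build a keyword subset for one image.

    generation_scores: dict keyword -> score for P(image | keyword),
                       e.g. from region matching (hypothetical input).
    correlation:       dict (keyword, keyword) -> correlation weight,
                       e.g. from caption co-occurrence (hypothetical input).
    alpha:             mixing weight between the two terms (assumed parameter).
    """
    selected = []
    candidates = set(generation_scores)
    while candidates and len(selected) < num_keywords:
        best_word, best_score = None, float("-inf")
        for w in candidates:
            # Generation term: how well the image supports keyword w.
            gen = generation_scores[w]
            # Correlation term: how well w agrees with keywords chosen so far.
            if selected:
                cor = sum(correlation.get((w, s), 0.0) for s in selected) / len(selected)
            else:
                cor = 0.0
            score = alpha * gen + (1.0 - alpha) * cor
            if score > best_score:
                best_word, best_score = w, score
        selected.append(best_word)
        candidates.remove(best_word)
    return selected


# Toy usage with made-up scores: correlated keywords reinforce each other.
scores = {"tiger": 0.40, "grass": 0.30, "sky": 0.25, "car": 0.05}
corr = {("grass", "tiger"): 0.8, ("tiger", "grass"): 0.8,
        ("sky", "grass"): 0.5, ("grass", "sky"): 0.5}
print(greedy_annotate(scores, corr, num_keywords=3))
```

The point of the greedy loop is that it scores one candidate keyword at a time against the partial annotation, so the cost grows with the vocabulary size and caption length rather than with the number of possible keyword subsets.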