Beyond tag relevance: integrating visual attention model and multi-instance learning for tag saliency ranking

  • Authors:
  • Songhe Feng; Congyan Lang; De Xu

  • Affiliations:
  • Beijing Jiaotong University, Beijing, China (all authors)

  • Venue:
  • Proceedings of the ACM International Conference on Image and Video Retrieval
  • Year:
  • 2010

Abstract

Tag ranking has recently emerged as an important research topic due to its potential applications in web image search. Conventional tag ranking approaches mainly rank the tags associated with a given image according to their relevance levels. However, such algorithms rely heavily on large-scale image datasets and a proper similarity measure to retrieve semantically relevant, multi-labeled images. In contrast to existing tag relevance ranking algorithms, this paper proposes a novel tag saliency ranking scheme that automatically ranks the tags associated with a given image according to their saliency with respect to the image content. To this end, the paper presents an integrated framework for tag saliency ranking that combines a visual attention model with a multi-instance learning algorithm to investigate the saliency ranking order of tags for a given image. Specifically, tags annotated at the image level are first propagated to the region level via an efficient multi-instance learning algorithm; a visual attention model is then employed to measure the importance of the regions in the image; finally, the tags are ranked according to the saliency values of their corresponding regions. Experiments conducted on the COREL and MSRC image datasets demonstrate the effectiveness and efficiency of the proposed framework.
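The abstract describes a three-step pipeline: multi-instance learning assigns image-level tags to regions, a visual attention model scores each region's importance, and tags are ranked by the saliency of their regions. Below is a minimal sketch of the final ranking step, assuming the first two components have already produced their outputs. The paper publishes no API, so the function name, the tag-to-region probability matrix, and the toy numbers are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical sketch of the ranking step: given per-region tag
# assignments (from a multi-instance learner) and per-region saliency
# scores (from a visual attention model), rank tags by the saliency of
# the regions they belong to. All names and values here are illustrative.
import numpy as np

def rank_tags_by_saliency(tag_region_probs, region_saliency):
    """Rank tags by the saliency of their assigned regions.

    tag_region_probs : (n_tags, n_regions) array, P(tag | region),
        assumed produced by MIL propagation of image-level tags.
    region_saliency  : (n_regions,) array, per-region importance from
        an attention model, assumed normalized to sum to 1.
    Returns (tag indices sorted most-to-least salient, saliency scores).
    """
    # Each tag's saliency is its expected region saliency under the
    # region assignment distribution.
    tag_saliency = tag_region_probs @ region_saliency
    return np.argsort(-tag_saliency), tag_saliency

# Toy usage: 3 tags over 4 segmented regions.
probs = np.array([
    [0.9, 0.1, 0.0, 0.0],   # "tiger": concentrated in region 0
    [0.0, 0.2, 0.7, 0.1],   # "grass": spread over regions 1-3
    [0.1, 0.0, 0.1, 0.8],   # "sky":   mostly region 3
])
saliency = np.array([0.6, 0.1, 0.2, 0.1])  # attention model output
order, scores = rank_tags_by_saliency(probs, saliency)
print(order, scores)  # "tiger" ranks first: its region dominates attention
```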