Combining visual attention model with multi-instance learning for tag ranking

  • Authors:
  • Songhe Feng; Hong Bao; Congyan Lang; De Xu

  • Affiliations:
  • Institute of Computer & Information Technology, Beijing Jiaotong University, Beijing 100044, China (all authors); Hong Bao is also with the Information College of Beijing Union University, Beijing 100101, China

  • Venue:
  • Neurocomputing
  • Year:
  • 2011

Abstract

Tag ranking has recently emerged as an important research topic due to its potential applications in web image search. Existing tag relevance ranking approaches mainly rank the tags associated with a given image according to their relevance levels. However, such algorithms rely heavily on a large-scale image dataset and a proper similarity measurement to retrieve semantically relevant, multi-labeled images. In contrast to existing tag relevance ranking algorithms, this paper proposes a novel tag saliency ranking scheme that automatically ranks the tags associated with a given image according to their saliency to the image content. To this end, the paper presents an integrated framework that combines a visual attention model with multi-instance learning to investigate the saliency ranking order of tags with respect to the given image. Specifically, tags annotated at the image level are first propagated to the region level via an efficient multi-instance learning algorithm; a visual attention model is then employed to measure the importance of each region in the image; finally, tags are ranked according to the saliency values of their corresponding regions. Experiments on the COREL and MSRC image datasets demonstrate the effectiveness and efficiency of the proposed framework.
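The final step of the pipeline reduces to a simple aggregation: once each region carries a tag (from the MIL step) and a saliency score (from the attention model), tags are ordered by the saliency mass of their regions. Below is a minimal, hypothetical Python sketch of that ranking step only; the function and variable names (`rank_tags_by_saliency`, `region_tags`, `region_saliency`) are illustrative and not from the paper, and the MIL propagation and attention model are assumed to have run already.

```python
# Hypothetical sketch of the tag saliency ranking step described in the
# abstract. Assumes a MIL step has assigned one tag per segmented region
# and a visual attention model has scored each region's saliency.

from collections import defaultdict

def rank_tags_by_saliency(region_tags, region_saliency):
    """Rank tags by the total saliency of the regions they label.

    region_tags:     {region_id: tag} mapping from the MIL propagation.
    region_saliency: {region_id: float} scores from the attention model.
    """
    tag_scores = defaultdict(float)
    for region, tag in region_tags.items():
        # Accumulate saliency over all regions carrying the same tag.
        tag_scores[tag] += region_saliency.get(region, 0.0)
    # Higher accumulated saliency => the tag is more salient to the image.
    return sorted(tag_scores, key=tag_scores.get, reverse=True)

# Toy usage: three segmented regions, two image-level tags.
region_tags = {"r0": "tiger", "r1": "grass", "r2": "grass"}
region_saliency = {"r0": 0.7, "r1": 0.2, "r2": 0.1}
print(rank_tags_by_saliency(region_tags, region_saliency))
# -> ['tiger', 'grass']
```

Summing saliency over regions is one plausible aggregation; a max or area-weighted average over a tag's regions would fit the same framework, and the paper's exact choice is not specified in the abstract.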