Leveraging large-scale weakly-tagged images to train inter-related classifiers for multi-label annotation

  • Authors:
  • Jianping Fan; Chunlei Yang; Yi Shen; Noboru Babaguchi; Hangzai Luo

  • Affiliations:
  • UNC-Charlotte, Charlotte, NC, USA; UNC-Charlotte, Charlotte, NC, USA; UNC-Charlotte, Charlotte, NC, USA; Osaka University, Osaka, Japan; East China Normal University, Shanghai, China

  • Venue:
  • LS-MMRM '09 Proceedings of the First ACM workshop on Large-scale multimedia retrieval and mining
  • Year:
  • 2009

Abstract

In this paper, we develop a new multi-label multi-task learning framework that leverages large-scale weakly-tagged images for inter-related classifier training. A novel image and tag cleansing algorithm is developed to tackle the issues of spam, synonymous, loose, and ambiguous tags and to obtain more relevant images. A visual concept network is generated to characterize the inter-concept visual similarity contexts precisely and to determine the inter-related learning tasks automatically. Through a multi-label multi-task learning paradigm, our structured max-margin learning algorithm leverages both the large-scale weakly-tagged images and the visual concept network to learn large numbers of inter-related classifiers for supporting multi-label image annotation.
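The visual concept network described above can be sketched as a graph whose nodes are concepts and whose edges link visually similar concepts; a concept's neighbors then define its inter-related learning tasks. The sketch below is a minimal illustration of that idea, not the paper's actual method: the per-concept feature vectors, the cosine similarity measure, and the edge threshold are all hypothetical placeholders.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors (illustrative measure)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def concept_network(features, threshold=0.9):
    """Build an undirected concept graph: keep an edge between two
    concepts when their visual similarity exceeds the threshold.
    Each concept's neighbor set plays the role of its inter-related tasks."""
    edges = {c: set() for c in features}
    names = list(features)
    for i, ci in enumerate(names):
        for cj in names[i + 1:]:
            if cosine(features[ci], features[cj]) >= threshold:
                edges[ci].add(cj)
                edges[cj].add(ci)
    return edges

# Toy mean feature vectors per concept (made up for illustration).
features = {
    "beach":    [0.90, 0.10, 0.20],
    "seaside":  [0.85, 0.15, 0.25],
    "mountain": [0.10, 0.90, 0.30],
}
net = concept_network(features, threshold=0.9)
# "beach" and "seaside" end up linked; "mountain" stays isolated.
```

In the paper's framework the similarity contexts would come from real visual features over the cleansed image collection; the thresholded-graph structure shown here is only one simple way to turn pairwise similarities into inter-related task groups.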