Towards optimizing human labeling for interactive image tagging

  • Authors:
  • Jinhui Tang;Qiang Chen;Meng Wang;Shuicheng Yan;Tat-Seng Chua;Ramesh Jain

  • Affiliations:
  • Nanjing University of Science and Technology, Nanjing, China;National University of Singapore, Singapore;National University of Singapore, Singapore;National University of Singapore, Singapore;National University of Singapore, Singapore;University of California, Irvine, CA

  • Venue:
  • ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP)
  • Year:
  • 2013

Abstract

Interactive tagging is a semi-automatic approach that combines human and computer effort to assign descriptive keywords to image content. It avoids the drawbacks of both fully automatic and purely manual tagging by striking a compromise between tagging performance and manual cost. However, conventional research on interactive tagging has focused mainly on sample selection and models for tag prediction. In this work, we investigate interactive tagging from a different angle: we introduce an interactive image tagging framework that makes fuller use of human labeling effort. That is, it can reach a specified tagging performance with less manual labeling, or achieve better tagging performance at a specified labeling cost. In the framework, hashing enables quick clustering of image regions, and a dynamic multiscale clustering labeling strategy is proposed so that users can label a large group of similar regions at a time. We also employ a tag refinement method that automatically corrects some inappropriate tags. Experiments on a large dataset demonstrate the effectiveness of our approach.
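The abstract mentions hashing as the mechanism for quickly grouping similar image regions so a user can label many regions at once. As a minimal illustrative sketch (not the paper's actual implementation), random-hyperplane locality-sensitive hashing can bucket region feature vectors, with each bucket serving as a candidate group for batch labeling; the function names and parameters here are hypothetical:

```python
# Illustrative sketch: random-hyperplane LSH buckets similar region
# feature vectors together, so one human label could cover a bucket.
import random
from collections import defaultdict


def make_hyperplanes(dim, n_bits, seed=0):
    """Draw n_bits random Gaussian hyperplanes in `dim` dimensions."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(n_bits)]


def lsh_key(vec, planes):
    """One bit per hyperplane: which side of the plane the vector lies on."""
    return tuple(
        int(sum(v * p for v, p in zip(vec, plane)) >= 0.0)
        for plane in planes
    )


def bucket_regions(features, n_bits=8, seed=0):
    """Group region feature vectors by their LSH key.

    Returns a dict mapping each hash key to the indices of the regions
    that fall into that bucket; similar vectors tend to share a key.
    """
    planes = make_hyperplanes(len(features[0]), n_bits, seed)
    buckets = defaultdict(list)
    for idx, vec in enumerate(features):
        buckets[lsh_key(vec, planes)].append(idx)
    return buckets
```

Identical feature vectors always share a bucket, and vectors pointing in opposite directions almost surely do not; increasing `n_bits` makes buckets smaller and purer, trading group size for homogeneity.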