Multi-view learning from imperfect tagging

  • Authors:
  • Zhongang Qi; Ming Yang; Zhongfei (Mark) Zhang; Zhengyou Zhang

  • Affiliations:
  • Zhejiang University, Hangzhou, China (Qi, Yang, and Zhang); Microsoft Research, Redmond, WA, USA (Zhengyou Zhang)

  • Venue:
  • Proceedings of the 20th ACM international conference on Multimedia
  • Year:
  • 2012


Abstract

In many real-world applications, tagging is imperfect: incomplete, inconsistent, and error-prone. Addressing this problem promises both societal and technical impact. In this paper, we investigate this arguably new problem: learning from imperfect tagging. We propose a general and effective learning scheme, Multi-view Imperfect Tagging Learning (MITL), for this problem. The main idea of MITL is to extract information from the imperfectly tagged training dataset from multiple views in order to differentiate the roles that data points play in classification. Further, we propose a novel discriminative classification method under the MITL framework that explicitly uses all the given labels simultaneously as additional features. This delivers more effective classification than the existing literature, in which one label at a time serves as the classification target while the remaining labels are ignored. The proposed methods not only complete incomplete tagging but also denoise noisy tagging through inductive learning. We apply the general solution to a more specific context, imperfect image annotation, and evaluate the proposed methods on a standard dataset from the related literature. Experiments show that they outperform peer methods on learning from imperfect tagging in cross-media settings.
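The labels-as-features idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' discriminative MITL method; it is a toy illustration (using a 1-nearest-neighbour predictor, with invented function names) of the general principle: when completing one missing tag, the other given tags are appended to the content features instead of being ignored.

```python
def euclidean(a, b):
    # Euclidean distance between two equal-length feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def complete_tag(X, Y, j):
    """Fill in missing values (None) of tag j for every data point.

    X : list of content feature vectors, one per data point.
    Y : list of tag vectors (0/1, or None where tag j is unobserved).
    The predictor for tag j sees [content features + all other tags],
    so the given labels act as additional features. For simplicity this
    sketch assumes the other tags are observed and uses 1-NN, whereas
    the paper proposes a discriminative classifier.
    """
    def augment(i):
        # Content features of point i plus its tags other than tag j.
        return X[i] + [Y[i][k] for k in range(len(Y[i])) if k != j]

    train = [i for i in range(len(X)) if Y[i][j] is not None]
    completed = [row[:] for row in Y]  # do not mutate the input
    for i in range(len(X)):
        if Y[i][j] is None:
            # Copy tag j from the nearest point in the augmented space.
            nn = min(train, key=lambda t: euclidean(augment(t), augment(i)))
            completed[i][j] = Y[nn][j]
    return completed

X = [[0.0], [0.1], [1.0]]            # content features
Y = [[1, 0], [None, 0], [0, 1]]      # tag 0 of point 1 is missing
print(complete_tag(X, Y, 0))         # point 1 inherits tag 0 from point 0
```

Running the example completes the missing tag to 1, because in the augmented space (content plus tag 1) point 1 is closest to point 0. Treating each tag in turn as the target while feeding the remaining tags in as features is the contrast the abstract draws against one-label-at-a-time learning.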