VideoMule: a consensus learning approach to multi-label classification from noisy user-generated videos

  • Authors:
  • Chandrasekar Ramachandran; Rahul Malik; Xin Jin; Jing Gao; Klara Nahrstedt; Jiawei Han

  • Affiliations:
  • University of Illinois at Urbana-Champaign, Urbana, IL, USA (all authors)

  • Venue:
  • MM '09 Proceedings of the 17th ACM international conference on Multimedia
  • Year:
  • 2009


Abstract

With the growing proliferation of conversational media and devices for generating multimedia content, the Internet has seen an expansion in websites catering to user-generated media. Most user-generated content is multimodal in nature, comprising video, audio, text (in the form of tags), comments, and so on. Content analysis is challenging on this type of media since it is noisy, unstructured, and unreliable. In this paper we propose VideoMule, a consensus learning approach for multi-label classification of noisy user-generated videos. In our scheme, we train classification and clustering algorithms on individual modes of information such as user comments, tags, and video features. We then combine the results of the trained classifiers and clustering algorithms using a novel heuristic consensus learning algorithm, which as a whole performs better than each individual learning model.
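To make the combination step concrete, below is a minimal sketch of consensus over per-modality base models, assuming each modality-specific learner emits per-label confidence scores that are merged by a weighted average and thresholded for multi-label output. The modality names, weights, and threshold are illustrative assumptions; the paper's actual heuristic consensus algorithm cannot be reconstructed from the abstract alone.

```python
# Sketch: weighted-average consensus over per-modality label scores.
# The models, weights, and threshold below are hypothetical, not the
# paper's algorithm.

from typing import Dict, List

Labels = Dict[str, float]  # label -> confidence score in [0, 1]

def consensus_predict(
    base_outputs: List[Labels],   # one score dict per modality (tags, comments, video features, ...)
    weights: List[float],         # relative trust in each modality (assumed)
    threshold: float = 0.5,       # labels scoring above this are assigned
) -> List[str]:
    """Combine per-modality label scores by a weighted average and
    return the multi-label prediction (all labels above threshold)."""
    total = sum(weights)
    combined: Dict[str, float] = {}
    for scores, w in zip(base_outputs, weights):
        for label, s in scores.items():
            combined[label] = combined.get(label, 0.0) + (w / total) * s
    return sorted(label for label, s in combined.items() if s >= threshold)

if __name__ == "__main__":
    # Hypothetical outputs from three modality-specific learners for one video.
    tag_model     = {"music": 0.9, "concert": 0.7, "sports": 0.1}
    comment_model = {"music": 0.8, "concert": 0.4, "sports": 0.2}
    video_model   = {"music": 0.6, "concert": 0.6, "sports": 0.3}
    print(consensus_predict([tag_model, comment_model, video_model],
                            weights=[1.0, 0.5, 1.0]))
    # -> ['concert', 'music']
```

The weighted average stands in for whatever heuristic the paper uses to reconcile classifier and clustering outputs; any real consensus scheme would also need to align cluster assignments with label semantics before combining them.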