Cross-domain video concept detection: A joint discriminative and generative active learning approach

  • Authors:
  • Huan Li;Yuan Shi;Yang Liu;Alexander G. Hauptmann;Zhang Xiong

  • Affiliations:
Vancl Research Center, Beijing 100102, China;Department of Computer Science, University of Southern California, CA 90089, USA;Department of Statistics, Yale University, New Haven, CT 06511, USA;School of Computer Science, Carnegie Mellon University, PA 15213, USA;School of Computer Science and Engineering, Beihang University, Beijing 100191, China

  • Venue:
  • Expert Systems with Applications: An International Journal
  • Year:
  • 2012

Abstract

In this work, we study the problem of cross-domain video concept detection, where the distributions of the source and target domains differ. Active learning can iteratively refine a source-domain classifier by querying labels for a few samples in the target domain, reducing the labeling effort. However, traditional active learning methods, which typically use a discriminative query strategy that selects the samples most ambiguous to the source-domain classifier, fail when the distribution difference between the two domains is too large. In this paper, we tackle this problem by proposing a joint active learning approach that combines a novel generative query strategy with the existing discriminative one. The approach adapts to the distribution difference and is more robust than approaches that rely on a single strategy. Experimental results on two synthetic datasets and the TRECVID video concept detection task highlight the effectiveness of our joint active learning approach.
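
To illustrate the general idea of combining a discriminative and a generative query strategy, the sketch below ranks unlabeled target-domain samples by a weighted sum of classifier ambiguity (entropy of the source classifier's posterior) and target-domain typicality (density under a Gaussian mixture). This is a minimal illustration, not the authors' formulation: the logistic-regression classifier, the Gaussian-mixture density estimate, the fixed trade-off weight `alpha`, and the function name `joint_query` are all assumptions made for the example.

```python
# Minimal sketch of a joint (discriminative + generative) query strategy.
# All model choices and the fixed weight alpha are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

def joint_query(source_clf, X_target, n_queries=5, alpha=0.5):
    """Rank unlabeled target samples by a convex combination of
    discriminative ambiguity and generative typicality scores."""
    # Discriminative score: ambiguity of the source classifier,
    # measured as the entropy of its predicted class posterior.
    proba = source_clf.predict_proba(X_target)
    entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)
    disc_score = entropy / (entropy.max() + 1e-12)

    # Generative score: how representative a sample is of the
    # target-domain distribution, estimated with a Gaussian mixture.
    gmm = GaussianMixture(n_components=3, random_state=0).fit(X_target)
    log_density = gmm.score_samples(X_target)
    gen_score = (log_density - log_density.min()) / \
                (log_density.max() - log_density.min() + 1e-12)

    # Joint score: alpha trades off the two strategies; in the paper this
    # trade-off adapts to the domain gap, whereas it is fixed here.
    joint_score = alpha * disc_score + (1 - alpha) * gen_score
    return np.argsort(joint_score)[::-1][:n_queries]

# Usage: query labels for the top-ranked target samples, add them to the
# training set, and retrain the classifier iteratively (toy data below).
rng = np.random.RandomState(0)
X_src = rng.randn(200, 2); y_src = (X_src[:, 0] > 0).astype(int)
X_tgt = rng.randn(100, 2) + np.array([1.5, 1.5])   # shifted target domain
clf = LogisticRegression().fit(X_src, y_src)
print(joint_query(clf, X_tgt))
```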