Automatic Concept Detector Refinement for Large-Scale Video Semantic Annotation

  • Authors:
  • Xueliang Liu; Benoit Huet

  • Venue:
  • ICSC '10 Proceedings of the 2010 IEEE Fourth International Conference on Semantic Computing
  • Year:
  • 2010

Abstract

With the explosion of content-sharing web sites, an unprecedented amount of multimedia content is made available online every day. Since search engine technologies rely essentially on textual information, there is an urgent need to infer relevant semantic descriptions of those multimedia documents through content-based analysis. In this paper, we propose an approach that leverages the sheer volume of data available online to refine semantic concept detectors for video annotation without requiring any additional human interaction. To address the problem in a realistic setting, we have collected a large video collection of about 42 thousand videos crawled from YouTube. A number of low-level features are extracted from those videos and included within the corpus. After training on a small initial set of labeled video shots, the concept detectors are run on the large-scale unlabeled corpus in order to identify and select new training samples. Thanks to this inexpensively obtained set of new training examples, the concept detectors can be reinforced and enhanced using a far larger pool of unlabeled samples, and therefore better adapt to the corpus at hand. The experimental results reported here show that annotation accuracy does indeed improve when the training set is extended with automatically labeled samples.
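The refinement procedure described in the abstract follows the familiar self-training pattern: train a detector on a few labeled shots, score the unlabeled pool, pseudo-label the most confident shots, and retrain. The sketch below illustrates that loop only; the one-dimensional "features", the nearest-centroid detector, and all function names are illustrative assumptions, not the paper's actual features or models.

```python
# Sketch of self-training concept-detector refinement (illustrative only):
# 1) fit a detector on a small labeled set of shots,
# 2) score the large unlabeled pool,
# 3) add the most confidently scored shots as pseudo-labeled examples,
# 4) retrain on the extended set.

def train(examples):
    """Fit a toy 1-D nearest-centroid detector: mean feature per class."""
    sums, counts = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
    for x, y in examples:
        sums[y] += x
        counts[y] += 1
    return {y: sums[y] / max(counts[y], 1) for y in (0, 1)}

def score(model, x):
    """Signed margin: positive means x is closer to the concept centroid."""
    return abs(x - model[0]) - abs(x - model[1])

def refine(labeled, unlabeled, rounds=2, top_k=2):
    """Self-training loop: repeatedly pseudo-label confident shots and retrain."""
    model = train(labeled)
    for _ in range(rounds):
        # rank unlabeled shots by detector confidence (absolute margin)
        ranked = sorted(unlabeled, key=lambda x: abs(score(model, x)),
                        reverse=True)
        pseudo = [(x, 1 if score(model, x) > 0 else 0)
                  for x in ranked[:top_k]]
        model = train(labeled + pseudo)  # retrain on the extended set
    return model

# Toy data: shots containing the concept cluster near 1.0, background near 0.0.
labeled = [(0.9, 1), (0.1, 0)]
unlabeled = [0.95, 0.85, 0.05, 0.15]
model = refine(labeled, unlabeled)
```

After refinement, the centroids have been pulled toward the (pseudo-labeled) corpus, so previously borderline shots such as `0.85` score positively. In practice the same loop would run per concept over SVM-style detectors and multi-dimensional shot features, with a confidence threshold rather than a fixed `top_k`.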