Identifying relevant frames in weakly labeled videos for training concept detectors

  • Authors:
  • Adrian Ulges; Christian Schulze; Daniel Keysers; Thomas Breuel

  • Affiliations:
  • Technical University of Kaiserslautern, Germany; German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany; German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany; DFKI and Technical University of Kaiserslautern, Germany

  • Venue:
  • CIVR '08: Proceedings of the 2008 International Conference on Content-Based Image and Video Retrieval
  • Year:
  • 2008

Abstract

A key problem in the automatic detection of semantic concepts (like 'interview' or 'soccer') in video streams is the manual acquisition of adequate training sets. Recently, we have proposed to use online videos downloaded from portals like youtube.com for this purpose, with the tags that users provide at upload time serving as ground-truth annotations. The problem with such training data is that it is weakly labeled: annotations are given only at the video level, and many shots of a video may be "non-relevant", i.e., not visually related to a tag. In this paper, we present a probabilistic framework for learning from such weakly annotated training videos in the presence of irrelevant content. In this framework, the relevance of each keyframe is modeled as a latent random variable that is estimated during training. In quantitative experiments on real-world online videos and TV news data, we demonstrate that the proposed model is significantly more robust to irrelevant content and yields concept detectors that generalize better.
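
The abstract does not spell out how the latent relevance variable is estimated, so the following is only an illustrative sketch of the general idea, not the authors' method: an EM-style loop over toy keyframe features (all names, distributions, and the fixed "background" model are assumptions made up for this example) that alternates between estimating per-frame relevance posteriors and refitting a concept model on relevance-weighted frames.

```python
# Illustrative sketch only: models each keyframe's relevance to the video-level
# tag as a latent binary variable and estimates it with EM. The Gaussian concept
# model, the fixed background model, and the synthetic features are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy "keyframe features" for one weakly labeled video: some frames truly show
# the tagged concept (mean ~ +2), the rest are irrelevant content (mean ~ -2).
relevant = rng.normal(loc=2.0, scale=1.0, size=(30, 2))
irrelevant = rng.normal(loc=-2.0, scale=1.0, size=(20, 2))
frames = np.vstack([relevant, irrelevant])

def gaussian_pdf(x, mean, var):
    """Isotropic Gaussian density, evaluated per frame."""
    d = x.shape[1]
    diff = x - mean
    return np.exp(-0.5 * np.sum(diff**2, axis=1) / var) / ((2 * np.pi * var) ** (d / 2))

# Background (irrelevant-content) model: a fixed broad Gaussian at the origin.
bg_mean, bg_var = np.zeros(2), 9.0

# Concept model: initialized from all frames, refined by EM.
c_mean, c_var = frames.mean(axis=0), frames.var()
prior_rel = 0.5  # prior probability that a frame is relevant to the tag

for _ in range(20):
    # E-step: posterior probability that each frame is relevant.
    p_rel = prior_rel * gaussian_pdf(frames, c_mean, c_var)
    p_irr = (1.0 - prior_rel) * gaussian_pdf(frames, bg_mean, bg_var)
    resp = p_rel / (p_rel + p_irr)

    # M-step: refit the concept model on frames weighted by their relevance.
    w = resp / resp.sum()
    c_mean = (w[:, None] * frames).sum(axis=0)
    c_var = (w * np.sum((frames - c_mean) ** 2, axis=1)).sum() / frames.shape[1]
    prior_rel = resp.mean()

print("estimated fraction of relevant frames:", round(prior_rel, 2))
```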