A novel method for semantic video concept learning using web images

  • Authors:
  • Yongqing Sun
  • Akira Kojima

  • Affiliations:
  • NTT Cyber Solutions Laboratories, NTT Corporation, Yokosuka-shi, Kanagawa-ken, Japan (both authors)

  • Venue:
  • MM '11 Proceedings of the 19th ACM international conference on Multimedia
  • Year:
  • 2011


Abstract

In recent years, exploiting rich web image resources has offered promising solutions to the problem of low-manual-cost concept learning. However, concept classifiers trained on web images perform poorly when applied directly to video concept detection. We propose a novel scheme for video concept learning with web images that combines the selection of web training data and transfer subspace learning within a unified framework. Starting from a small set of video keyframes related to a target concept, we select high-quality web training data by referring to the content of those keyframes. Then, exploiting both the selected web data and the video keyframes, we train a robust concept classifier by means of a transfer subspace learning method. Experimental results demonstrate the robustness and effectiveness of our method.
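To make the data-selection step concrete, below is a minimal sketch of one plausible realization: web images are ranked by the cosine similarity of their feature vectors to the nearest video keyframe, and the top-k are kept as training data. The feature representation, the selection criterion, and the subsequent transfer subspace learning step are assumptions for illustration, not a reproduction of the paper's actual algorithm.

```python
import numpy as np

def select_web_training_data(web_feats, keyframe_feats, k):
    """Rank web images by cosine similarity to the closest keyframe
    and return the indices of the top-k candidates.

    web_feats:      (n_web, d) feature matrix of candidate web images
    keyframe_feats: (n_key, d) feature matrix of video keyframes
    """
    # L2-normalize rows so that dot products become cosine similarities
    w = web_feats / np.linalg.norm(web_feats, axis=1, keepdims=True)
    q = keyframe_feats / np.linalg.norm(keyframe_feats, axis=1, keepdims=True)

    sims = w @ q.T                 # (n_web, n_key) pairwise similarities
    scores = sims.max(axis=1)      # similarity to the closest keyframe
    return np.argsort(-scores)[:k] # indices of the k best-matching web images

# Toy usage: two keyframe-like web images and one unrelated one
web = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
keys = np.array([[1.0, 0.0]])
selected = select_web_training_data(web, keys, k=2)
```

In this toy example the first and third web images, which point in nearly the same direction as the keyframe, are selected, while the orthogonal second image is filtered out.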