Transductive multi-label learning for video concept detection

  • Authors:
  • Jingdong Wang (Microsoft Research Asia, Beijing, China)
  • Yinghai Zhao (University of Science & Technology of China, Hefei, China)
  • Xiuqing Wu (University of Science & Technology of China, Hefei, China)
  • Xian-Sheng Hua (Microsoft Research Asia, Beijing, China)

  • Venue:
  • MIR '08 Proceedings of the 1st ACM international conference on Multimedia information retrieval
  • Year:
  • 2008

Abstract

Transductive video concept detection is an effective way to handle the lack of sufficient labeled videos. However, another issue, multi-label interdependence, is not fundamentally addressed by existing transductive methods. Most solutions either apply a transductive single-label approach to detect each concept separately, ignoring the relations between concepts, or simply impose a smoothness assumption over the multiple labels of each video without truly exploiting the interdependence between concepts. On the other hand, semi-supervised extensions of supervised multi-label classifiers, such as correlative multi-label support vector machines, are usually intractable, and hence impractical, because of their high computational cost. In this paper, we propose an effective transductive multi-label classification approach that simultaneously models, in an integrated framework, the labeling consistency between visually similar videos and the multi-label interdependence of each video. We compare the proposed approach with several representative transductive single-label and supervised multi-label classification approaches on the video concept detection task over the widely used TRECVID data set. The comparative results demonstrate the superiority of the proposed approach.
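The two ingredients the abstract names, labeling consistency over a visual-similarity graph and multi-label interdependence per video, can be sketched as a simple iterative propagation. The sketch below is illustrative only and is not the paper's exact formulation: the function name, the Gaussian graph construction, the co-occurrence-based label correlation matrix, and the mixing weights `alpha`/`beta` are all assumptions made for the example.

```python
import numpy as np

def transductive_multilabel(X, Y, labeled_mask, alpha=0.6, beta=0.2, n_iter=100):
    """Hedged sketch of transductive multi-label propagation.

    X            : (n, d) feature vectors for all videos (labeled + unlabeled)
    Y            : (n, k) initial label matrix (zeros for unlabeled videos)
    labeled_mask : (n,) boolean mask of labeled videos
    Returns F    : (n, k) propagated label scores.
    """
    # Visual-similarity graph with a Gaussian kernel (bandwidth = median
    # squared distance; an illustrative choice, not from the paper).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * np.median(d2) + 1e-12))
    np.fill_diagonal(W, 0.0)
    D = W.sum(axis=1)
    S = W / np.sqrt(np.outer(D, D))  # symmetrically normalized similarity

    # Multi-label interdependence: label co-occurrence estimated from the
    # labeled videos, row-normalized (again an illustrative estimate).
    Yl = Y[labeled_mask]
    C = Yl.T @ Yl
    C = C / (np.abs(C).sum(axis=1, keepdims=True) + 1e-12)

    # Iterate: graph smoothness term + label-correlation term + clamping
    # to the initial labels. Converges since alpha + beta < 1.
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + beta * (F @ C) + (1.0 - alpha - beta) * Y
    return F
```

On a toy set of six videos in two visual clusters with one labeled example per concept, the unlabeled videos in each cluster end up scoring higher on that cluster's concept, which is the labeling-consistency behavior the abstract describes; the `F @ C` term additionally shares score mass between concepts that co-occur in the labeled data.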