Correlative linear neighborhood propagation for video annotation

  • Authors:
  • Jinhui Tang, Xian-Sheng Hua, Meng Wang, Zhiwei Gu, Guo-Jun Qi, Xiuqing Wu

  • Affiliations:
  • Jinhui Tang: School of Computing, National University of Singapore, Singapore, and Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, China
  • Xian-Sheng Hua: Microsoft Research Asia, Beijing, China
  • Meng Wang: Microsoft Research Asia, Beijing, China, and Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, China
  • Zhiwei Gu: Search Technology Center, Microsoft Research Asia, Beijing, China, and Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, China
  • Guo-Jun Qi: Department of Automation, University of Science and Technology of China, Hefei, China
  • Xiuqing Wu: Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, China

  • Venue:
  • IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
  • Year:
  • 2009

Abstract

Recently, graph-based semisupervised learning methods have been widely applied in the multimedia research area. However, when applied to video semantic annotation in a multilabel setting, these methods neglect an important characteristic of video data: semantic concepts appear correlatively and interact naturally with each other rather than existing in isolation. In this paper, we incorporate this semantic correlation into graph-based semisupervised learning and propose a novel method, named correlative linear neighborhood propagation, to improve annotation performance. Experiments conducted on the Text REtrieval Conference VIDeo retrieval evaluation (TRECVID) data set have demonstrated its effectiveness and efficiency.
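The abstract does not detail the algorithm, but the base method it builds on, linear neighborhood propagation (LNP), is a standard graph-based semisupervised scheme: each sample is linearly reconstructed from its k nearest neighbors to obtain nonnegative edge weights, and labels are then iteratively spread over that graph. The sketch below is an illustrative NumPy implementation of plain LNP only (not the paper's correlative extension); the function name `lnp_labels` and the parameters `k`, `alpha`, and `iters` are our own assumptions for the example.

```python
import numpy as np

def lnp_labels(X, Y, k=5, alpha=0.9, iters=100):
    """Sketch of linear neighborhood propagation (LNP).

    X : (n, d) feature matrix.
    Y : (n, c) initial label matrix; rows of unlabeled samples are zero.
    Returns the propagated soft label matrix F.
    """
    n = X.shape[0]
    W = np.zeros((n, n))
    # Pairwise squared Euclidean distances.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]       # k nearest neighbors (skip self)
        Z = X[nbrs] - X[i]                      # neighbors centered on sample i
        G = Z @ Z.T + 1e-6 * np.eye(len(nbrs))  # regularized local Gram matrix
        w = np.linalg.solve(G, np.ones(len(nbrs)))  # LLE-style reconstruction
        w = np.clip(w, 0.0, None)               # LNP keeps weights nonnegative
        W[i, nbrs] = w / max(w.sum(), 1e-12)    # normalize row to sum to 1
    # Iterative propagation: F <- alpha * W F + (1 - alpha) * Y.
    F = Y.copy()
    for _ in range(iters):
        F = alpha * (W @ F) + (1 - alpha) * Y
    return F
```

With two well-separated groups of points and one labeled sample per group, the propagation assigns each unlabeled point the label of its group, since the kNN graph contains no cross-group edges.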