Video Annotation Based on Kernel Linear Neighborhood Propagation

  • Authors:
  • Jinhui Tang;Xian-Sheng Hua;Guo-Jun Qi;Yan Song;Xiuqing Wu

  • Affiliations:
  • Dept. of Electron. Eng. & Inf. Sci., Univ. of Sci. & Technol. of China, Hefei

  • Venue:
  • IEEE Transactions on Multimedia
  • Year:
  • 2008

Abstract

The insufficiency of labeled training data for representing the distribution of the entire dataset is a major obstacle in automatic semantic annotation of large-scale video databases. Semi-supervised learning algorithms, which attempt to learn from both labeled and unlabeled data, are promising for solving this problem. In this paper, a novel graph-based semi-supervised learning method named kernel linear neighborhood propagation (KLNP) is proposed and applied to video annotation. This approach combines the consistency assumption, which is the basic assumption in semi-supervised learning, with the locally linear embedding (LLE) method in a nonlinear kernel-mapped space. KLNP improves on the recently proposed linear neighborhood propagation (LNP) method by tackling the limitation of its assumption that semantics are locally linearly distributed. Experiments conducted on the TRECVID dataset demonstrate that this approach outperforms other popular graph-based semi-supervised learning methods for video semantic annotation.
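
To make the idea concrete, the snippet below is a minimal, illustrative sketch of a KLNP-style procedure, not the paper's implementation. It assumes an RBF kernel, computes LLE-style reconstruction weights for each sample's k nearest neighbors purely through kernel evaluations (so the kernel-mapped space is never formed explicitly), and then propagates labels over the resulting graph with the standard iteration F ← αWF + (1 − α)Y. The non-negativity constraint that LNP enforces via quadratic programming is approximated here by simple clipping; the function names, the parameters gamma, alpha, and k, and these simplifications are assumptions for illustration.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def klnp_sketch(X, Y, k=10, gamma=1.0, alpha=0.99, n_iter=200):
    """Simplified KLNP-style label propagation (illustrative sketch).

    X : (n, d) features for all samples, labeled and unlabeled
    Y : (n, c) one-hot labels; rows for unlabeled samples are all zero
    Returns soft label scores F of shape (n, c).
    """
    n = X.shape[0]
    K = rbf_kernel(X, gamma)

    # Squared distances in the kernel-mapped space: ||phi(x_i) - phi(x_j)||^2
    diag = np.diag(K)
    d2 = diag[:, None] + diag[None, :] - 2.0 * K

    W = np.zeros((n, n))
    for i in range(n):
        # k nearest neighbors of sample i in the kernel-mapped space (excluding i)
        nbrs = np.argsort(d2[i])[1:k + 1]
        # Local Gram matrix C[j, l] = (phi(x_i) - phi(x_j)) . (phi(x_i) - phi(x_l)),
        # expressed entirely through kernel evaluations
        C = (K[i, i]
             - K[i, nbrs][None, :]
             - K[i, nbrs][:, None]
             + K[np.ix_(nbrs, nbrs)])
        C += 1e-6 * np.trace(C) * np.eye(k)      # regularize for numerical stability
        w = np.linalg.solve(C, np.ones(k))       # LLE-style reconstruction weights
        w = np.maximum(w, 0.0)                   # crude stand-in for LNP's QP constraint
        w /= max(w.sum(), 1e-12)                 # normalize weights to sum to one
        W[i, nbrs] = w

    # Label propagation over the neighborhood graph: F <- alpha * W F + (1 - alpha) * Y
    Y = Y.astype(float)
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * (W @ F) + (1.0 - alpha) * Y
    return F
```

Predicted concepts for unlabeled shots can then be read off as F.argmax(axis=1), or the per-column scores can be used directly for ranking in annotation.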