Kernel-based linear neighborhood propagation for semantic video annotation

  • Authors:
  • Jinhui Tang, Xian-Sheng Hua, Yan Song, Guo-Jun Qi, Xiuqing Wu

  • Affiliations:
  • Jinhui Tang, Yan Song, Guo-Jun Qi, Xiuqing Wu: Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, China
  • Xian-Sheng Hua: Microsoft Research Asia, Beijing, China

  • Venue:
  • PAKDD'07 Proceedings of the 11th Pacific-Asia conference on Advances in knowledge discovery and data mining
  • Year:
  • 2007

Abstract

The insufficiency of labeled training samples for representing the distribution of the entire data set (both labeled and unlabeled) is a major obstacle in the automatic semantic annotation of large-scale video databases. Semi-supervised learning algorithms, which attempt to learn from both labeled and unlabeled data, are promising for solving this problem. In this paper, we present a novel semi-supervised approach named Kernel-based Linear Neighborhood Propagation (Kernel LNP) for video annotation. This approach combines the consistency assumption and the Locally Linear Embedding (LLE) method in a nonlinear kernel-mapped space, improving the recently proposed Linear Neighborhood Propagation (LNP) method by tackling the limitation of its locally linear assumption on the distribution of semantics. Experiments conducted on the TRECVID data set demonstrate that this approach obtains more accurate results than LNP for video semantic annotation.
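The abstract does not give the paper's exact kernel, neighborhood size, or weight-optimization procedure, but the core idea can be sketched: compute LLE-style reconstruction weights for each point over its neighbors in a kernel-induced feature space (all inner products reduce to kernel evaluations), then propagate labels through the resulting weight graph under the consistency assumption. The RBF kernel, the least-squares weight solve with nonnegativity clipping, and all parameter values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Pairwise RBF kernel: K[i, j] = exp(-gamma * ||x_i - x_j||^2).
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def kernel_lnp(X, Y, n_neighbors=5, gamma=1.0, alpha=0.9):
    """Illustrative Kernel LNP sketch (hypothetical implementation).

    X: (n, d) feature matrix.
    Y: (n, c) label matrix; one-hot rows for labeled points,
       all-zero rows for unlabeled points.
    Returns predicted class indices for all n points.
    """
    n = X.shape[0]
    K = rbf_kernel(X, gamma)
    # Distances in the kernel-mapped space:
    # ||phi(x_i) - phi(x_j)||^2 = K_ii + K_jj - 2 K_ij.
    diag = np.diag(K)
    D = diag[:, None] + diag[None, :] - 2.0 * K
    W = np.zeros((n, n))
    for i in range(n):
        order = np.argsort(D[i])
        nbrs = [j for j in order if j != i][:n_neighbors]
        # Local Gram matrix of neighbor differences in feature space:
        # C[a, b] = (phi(x_i) - phi(x_a)) . (phi(x_i) - phi(x_b)),
        # expressible purely via kernel values (the "kernel trick").
        C = (K[i, i] - K[i, nbrs][None, :] - K[nbrs, i][:, None]
             + K[np.ix_(nbrs, nbrs)])
        C += 1e-6 * np.trace(C) * np.eye(len(nbrs))  # regularize
        # LLE-style reconstruction weights: solve C w = 1, normalize.
        # (LNP proper uses a constrained QP; clipping is a simplification.)
        w = np.linalg.solve(C, np.ones(len(nbrs)))
        w = np.maximum(w, 0.0)
        w /= w.sum()
        W[i, nbrs] = w
    # Closed-form label propagation under the consistency assumption:
    # F = (1 - alpha) * (I - alpha * W)^{-1} Y.
    F = (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * W, Y)
    return F.argmax(axis=1)
```

On a toy two-cluster problem with one labeled point per cluster, the propagation step spreads each label across its cluster through the kernel-space neighborhood graph, which is the behavior the semi-supervised setting relies on.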