Refining video annotation by exploiting pairwise concurrent relation

  • Authors:
  • Zheng-Jun Zha;Tao Mei;Xian-Sheng Hua;Guo-Jun Qi;Zengfu Wang

  • Affiliations:
  • University of Science and Technology of China, Hefei, China;Microsoft Research Asia, Beijing, China;Microsoft Research Asia, Beijing, China;University of Science and Technology of China, Hefei, China;University of Science and Technology of China, Hefei, China

  • Venue:
  • Proceedings of the 15th ACM international conference on Multimedia
  • Year:
  • 2007

Abstract

Video annotation is a promising and essential step toward content-based video search and retrieval. Most state-of-the-art video annotation approaches detect multiple semantic concepts in isolation, neglecting the fact that video concepts are usually semantically correlated. In this paper, we propose to refine video annotation by leveraging the pairwise concurrent relations among video concepts. These concurrent relations are explicitly modeled by a concurrent matrix, and a propagation strategy is then adopted to refine the annotations: by iteratively spreading the scores of all related concepts to each other, the detection results converge to a stable and optimal state. In contrast with existing concept fusion methods, the proposed approach is computationally more efficient and easier to implement, as it does not require constructing any contextual model. Furthermore, we show its intuitive connection with the PageRank algorithm. We conduct experiments on the TRECVID 2005 corpus and report superior performance compared to existing key approaches.
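
The abstract does not spell out the propagation strategy, but a minimal sketch of a PageRank-style score propagation over a pairwise concurrent matrix might look like the following. The function name refine_annotations, the damping factor alpha, the row normalization, and the toy concept data are illustrative assumptions, not the authors' exact formulation:

    import numpy as np

    def refine_annotations(scores, cooccurrence, alpha=0.85, iters=50):
        # Row-normalize the concurrent matrix so each concept distributes
        # its score proportionally to the concepts it co-occurs with.
        row_sums = cooccurrence.sum(axis=1, keepdims=True)
        W = np.divide(cooccurrence, row_sums,
                      out=np.zeros_like(cooccurrence), where=row_sums > 0)

        refined = scores.astype(float)
        for _ in range(iters):
            # Blend scores spread in from correlated concepts with the
            # original detector outputs; for alpha < 1 this update is a
            # contraction, so the scores converge to a fixed point.
            refined = alpha * (W.T @ refined) + (1 - alpha) * scores
        return refined

    # Toy example: "car" and "road" co-occur often, "indoor" rarely does.
    initial = np.array([0.7, 0.2, 0.1])           # car, road, indoor
    C = np.array([[0., 8., 1.],
                  [8., 0., 1.],
                  [1., 1., 0.]])                  # pairwise co-occurrence counts
    print(refine_annotations(initial, C))         # road's score is pulled up by car

As in PageRank, the damping term anchors the refined scores to the original detector outputs, so the iteration cannot drift arbitrarily far from the initial annotations.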