Mining concept relationship in temporal context for effective video annotation

  • Authors:
  • Jian Yi; Yuxin Peng; Jianguo Xiao

  • Affiliation:
  • Institute of Computer Science and Technology, Peking University, Beijing 100871, China (all authors)

  • Venue:
  • MM '11: Proceedings of the 19th ACM International Conference on Multimedia
  • Year:
  • 2011

Abstract

We propose a new method to boost the performance of video annotation by exploiting concept relationships in temporal context. The motivation for our idea comes mainly from the fact that temporally continuous shots in a video generally have consistent content, so the concepts appearing in these shots should be semantically relevant. We use a temporal model to describe the contributions of relevant concepts to the presence of a target concept. By connecting our model with a conditional random field and adopting its learning and inference approaches, we obtain a refined probability that a concept occurs in a shot, which leverages both the temporal context and the initial output of video annotation. Experimental results on the widely used TRECVID dataset demonstrate the effectiveness of our method in improving video annotation accuracy.
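To make the refinement idea concrete, below is a minimal sketch of temporal smoothing of per-shot concept scores. It is not the authors' model: the paper couples a temporal concept-relationship model with a conditional random field whose potentials are learned, whereas this sketch treats each concept independently as a two-state chain over temporally ordered shots, uses the initial detector scores as unary potentials, and assumes a fixed hypothetical `consistency` parameter for the pairwise agreement between adjacent shots. The refined marginals are computed with standard forward-backward inference.

```python
import numpy as np

def refine_temporal(init_probs, consistency=0.8):
    """Refine per-shot concept probabilities on a binary chain.

    init_probs : 1-D array of initial detector probabilities that the
                 concept is present in each (temporally ordered) shot.
    consistency: assumed probability that adjacent shots share the same
                 concept label (a hypothetical stand-in; the paper
                 learns its CRF potentials from data).
    Returns the posterior P(concept present | all shots) per shot.
    """
    n = len(init_probs)
    # Unary potentials: columns = [absent, present].
    unary = np.stack([1.0 - np.asarray(init_probs), init_probs], axis=1)
    # Pairwise potential rewarding label agreement between adjacent shots.
    pair = np.array([[consistency, 1.0 - consistency],
                     [1.0 - consistency, consistency]])

    # Forward pass (normalised at each step for numerical stability).
    fwd = np.zeros((n, 2))
    fwd[0] = unary[0] / unary[0].sum()
    for t in range(1, n):
        fwd[t] = unary[t] * (fwd[t - 1] @ pair)
        fwd[t] /= fwd[t].sum()

    # Backward pass.
    bwd = np.ones((n, 2))
    for t in range(n - 2, -1, -1):
        bwd[t] = pair @ (unary[t + 1] * bwd[t + 1])
        bwd[t] /= bwd[t].sum()

    # Combine and renormalise to get per-shot posterior marginals.
    post = fwd * bwd
    post /= post.sum(axis=1, keepdims=True)
    return post[:, 1]

# Example: an isolated low score surrounded by confident detections of
# the same concept is pulled up by its temporal neighbours.
print(refine_temporal(np.array([0.9, 0.2, 0.85, 0.8])))
```

The design choice this illustrates is the one the abstract argues for: the refined score of a shot is no longer the detector output alone but a fusion of that output with evidence from temporally adjacent shots, so a momentary detector failure in an otherwise consistent scene is corrected by context.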