Refining video annotation by exploiting inter-shot context

  • Authors:
Jian Yi; Yuxin Peng; Jianguo Xiao

  • Affiliations:
Institute of Computer Science and Technology, Peking University, Beijing 100871, China (all authors)

  • Venue:
Proceedings of the ACM International Conference on Multimedia
  • Year:
  • 2010

Abstract

This paper proposes a new approach to refine video annotation by exploiting inter-shot context. Our method is novel in two main ways. First, to refine the annotation results for a target concept, we model the sequence of shots in a video as a chain-structured conditional random field (CRF); this lets us capture different kinds of concept relationships in the inter-shot context and thereby improve annotation accuracy. Second, to exploit the inter-shot context for the target concept, we classify shots into different types according to their correlation with the target concept, and use these types to represent the different kinds of concept relationships in the inter-shot context. Experiments on the widely used TRECVID 2006 data set show that our method is effective for refining video annotation, achieving a significant performance improvement over several state-of-the-art methods.
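To make the chain-CRF refinement step concrete, the sketch below shows forward-backward inference on a chain over shots in Python, where unary potentials come from per-shot detector scores and a pairwise potential rewards temporally consistent labels. The function name refine_shot_scores, the hand-set transition matrix, and the toy scores are illustrative assumptions only; the paper's actual model additionally distinguishes shot types by their correlation with the target concept when defining the potentials.

```python
import numpy as np

def refine_shot_scores(unary, transition):
    """Refine per-shot concept scores with forward-backward on a chain CRF.

    unary      : (T, K) array of non-negative potentials, e.g. exp(score)
                 of a per-shot concept detector (K = 2: absent/present).
    transition : (K, K) array of pairwise potentials between adjacent shots.
    Returns a (T, K) array of posterior marginals, the refined scores.
    """
    T, K = unary.shape
    alpha = np.zeros((T, K))
    beta = np.zeros((T, K))

    # Forward pass: alpha[t, j] ~ unary[t, j] * sum_i alpha[t-1, i] * transition[i, j]
    alpha[0] = unary[0] / unary[0].sum()
    for t in range(1, T):
        alpha[t] = unary[t] * (alpha[t - 1] @ transition)
        alpha[t] /= alpha[t].sum()          # normalise to avoid underflow

    # Backward pass: beta[t, i] ~ sum_j transition[i, j] * unary[t+1, j] * beta[t+1, j]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = transition @ (unary[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()

    marginals = alpha * beta
    return marginals / marginals.sum(axis=1, keepdims=True)

# Toy usage: five shots with noisy detector scores for absent/present.
raw = np.array([[0.6, 0.4], [0.3, 0.7], [0.55, 0.45], [0.2, 0.8], [0.7, 0.3]])
trans = np.array([[2.0, 1.0],            # pairwise potential favouring
                  [1.0, 2.0]])           # agreement between adjacent shots
print(refine_shot_scores(raw, trans))
```

Normalising alpha and beta at each step keeps the recursion numerically stable on long shot sequences; the final posterior marginals serve as the refined annotation scores.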