Improving News Video Annotation with Semantic Context

  • Authors:
  • Yu Qiu; Genliang Guan; Zhiyong Wang; Dagan Feng

  • Venue:
  • DICTA '10: Proceedings of the 2010 International Conference on Digital Image Computing: Techniques and Applications
  • Year:
  • 2010

Abstract

Automatic video annotation has been proposed to bridge the semantic gap in content-based retrieval and thereby facilitate concept-based video retrieval. Recently, utilizing context information has emerged as an important direction in automatic visual information annotation. In this paper, we present a novel video annotation approach that utilizes the semantic context extracted from video subtitles. The semantic context of a video shot, formed by a set of key terms identified from the video's subtitles, is used to refine the initial annotation results by exploiting the semantic similarity between those key terms and the candidate annotation concepts. Similarity measurements, including Google distance and WordNet distance, have been investigated for this refinement. In addition, visualness is utilized to further discriminate individual terms for finer refinement granularity. Extensive experiments on the TRECVID 2005 dataset demonstrate significant improvement from the proposed annotation approach and investigate the impact of various factors.
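The refinement idea described in the abstract can be illustrated with a small sketch. The snippet below is not the authors' implementation; it is a minimal Python illustration, assuming hypothetical page-hit counts, of one of the similarity measures the paper names (Normalized Google Distance, from Cilibrasi and Vitanyi) and of blending an initial detector score with the average closeness between a candidate concept and the shot's subtitle key terms. The function names, the blending weight `alpha`, and the example counts are all assumptions for illustration only.

```python
from math import log

def ngd(fx, fy, fxy, n):
    """Normalized Google Distance from page-hit counts:
    fx, fy = hits for each term, fxy = hits for both terms
    together, n = total number of indexed pages."""
    lx, ly, lxy = log(fx), log(fy), log(fxy)
    return (max(lx, ly) - lxy) / (log(n) - min(lx, ly))

def refine(initial_scores, context_terms, counts, n, alpha=0.5):
    """Blend each concept's initial annotation score with its
    average semantic closeness (1 - NGD) to the key terms that
    form the shot's semantic context (a hypothetical scheme)."""
    refined = {}
    for concept, score in initial_scores.items():
        sims = [1.0 - ngd(counts[concept], counts[t],
                          counts[(concept, t)], n)
                for t in context_terms]
        refined[concept] = alpha * score + (1 - alpha) * sum(sims) / len(sims)
    return refined
```

With made-up counts where "car" co-occurs with the subtitle term "road" far more often than "sports" does, the refinement lifts "car" above "sports" even when both start with the same detector score, which is the effect the abstract describes.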