Automatic annotation and retrieval for videos

  • Authors:
  • Fangshi Wang; De Xu; Wei Lu; Hongli Xu

  • Affiliations:
  • Fangshi Wang, De Xu, Hongli Xu: School of Computer & Information Technology, Beijing Jiaotong University, Beijing, China; Wei Lu: School of Software, Beijing Jiaotong University, Beijing, China

  • Venue:
  • PSIVT'06: Proceedings of the First Pacific Rim Conference on Advances in Image and Video Technology
  • Year:
  • 2006

Abstract

Retrieving videos by keywords requires semantic knowledge of the videos. However, manual video annotation is very costly and time consuming. Most work reported in the literature focuses on annotating a video shot with either a single semantic concept or a fixed number of words. In this paper, we propose a new approach to automatically annotate a video shot with a varying number of semantic concepts and to retrieve videos based on text queries. First, a simple but efficient method is presented to automatically extract a Semantic Candidate Set (SCS) for a video shot based on visual features. Second, a semantic network with n nodes is built by an Improved Dependency Analysis Based Method (IDABM), which reduces the time complexity of orienting the edges from O(n^4) to O(n^2). Third, the Final Annotation Set (FAS) is obtained from the SCS by Bayesian inference. Finally, a new way is proposed to rank the retrieved key frames according to the probabilities obtained during Bayesian inference. Experiments show that our method is effective in automatically annotating video shots and retrieving videos by keywords.
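
To make the overall flow concrete, the following is a minimal sketch of the annotate-then-rank pipeline described in the abstract: extract a candidate concept set from visually similar annotated shots, keep the candidates whose (toy) posterior probability is high enough, and rank key frames for a text query by the probability attached to the queried concept. This is not the paper's method: the IDABM structure-learning step and exact Bayesian inference are replaced here by a hypothetical fixed co-occurrence table and a naive posterior update, and all function names, thresholds, and toy data are assumptions made for illustration only.

```python
# Hypothetical sketch of the annotate-then-rank flow; not the paper's IDABM/inference.
import math

def visual_distance(f1, f2):
    """Euclidean distance between low-level visual feature vectors (assumed features)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))

def extract_scs(shot_features, annotated_shots, k=3):
    """Semantic Candidate Set: union of concepts from the k visually nearest
    annotated training shots (a stand-in for the paper's SCS extraction)."""
    nearest = sorted(annotated_shots,
                     key=lambda s: visual_distance(shot_features, s["features"]))[:k]
    scs = set()
    for shot in nearest:
        scs.update(shot["concepts"])
    return scs

def infer_fas(scs, prior, cooccur, threshold=0.5):
    """Final Annotation Set: keep candidates whose toy posterior, combining a prior
    with co-occurrence support from the other candidates, exceeds a threshold."""
    posteriors = {}
    for c in scs:
        support = [cooccur.get((c, other), prior.get(c, 0.1))
                   for other in scs if other != c]
        posteriors[c] = max([prior.get(c, 0.1)] + support)
    return {c: p for c, p in posteriors.items() if p >= threshold}

def rank_keyframes(query_concept, annotated_keyframes):
    """Rank key frames for a text query by the probability attached to the
    queried concept during annotation, highest first."""
    hits = [(kf["id"], kf["fas"].get(query_concept, 0.0)) for kf in annotated_keyframes]
    return sorted((h for h in hits if h[1] > 0.0), key=lambda h: h[1], reverse=True)

if __name__ == "__main__":
    # Toy training shots with visual features and ground-truth concepts (assumed data).
    training = [
        {"features": [0.9, 0.1], "concepts": {"sky", "outdoor"}},
        {"features": [0.8, 0.2], "concepts": {"outdoor", "building"}},
        {"features": [0.1, 0.9], "concepts": {"indoor", "face"}},
    ]
    prior = {"sky": 0.4, "outdoor": 0.6, "building": 0.3, "indoor": 0.5, "face": 0.5}
    cooccur = {("sky", "outdoor"): 0.8, ("building", "outdoor"): 0.7}

    scs = extract_scs([0.85, 0.15], training, k=2)
    fas = infer_fas(scs, prior, cooccur)
    print("SCS:", scs)
    print("FAS with probabilities:", fas)

    keyframes = [{"id": "kf1", "fas": fas}, {"id": "kf2", "fas": {"indoor": 0.7}}]
    print("Ranking for 'outdoor':", rank_keyframes("outdoor", keyframes))
```

The variable-length annotation falls out of the thresholding step: each shot keeps however many candidates clear the posterior threshold, rather than a fixed number of words, which mirrors the behaviour the abstract claims for the FAS.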