Temporal-Spatial refinements for video concept fusion

  • Authors:
  • Jie Geng; Zhenjiang Miao; Hai Chi

  • Affiliations:
  • Institute of Information Science, Beijing Jiaotong University, Beijing, China (Jie Geng, Zhenjiang Miao); Jilin Electric Power Maintenance Company, China (Hai Chi)

  • Venue:
  • ACCV'12: Proceedings of the 11th Asian Conference on Computer Vision - Volume Part III
  • Year:
  • 2012

Abstract

Context-based concept fusion (CBCF) is increasingly used in video semantic indexing; it exploits the various relations among different concepts to refine the original detection results. In this paper, we present a CBCF method called the Temporal-Spatial Node Balance algorithm (TSNB). The method is based on a physical model in which concepts are regarded as nodes and relations as forces, and it balances all spatial and temporal relations against the moving cost of the nodes. The model is intuitive and makes it easy to observe how a concept influences, or is influenced by, other concepts, and it uses both spatial and temporal information to describe the semantic structure of the video. We apply the TSNB algorithm to the TRECVid 2005-2010 datasets. The results show that the method outperforms all existing works known to us, while also being faster.
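
The abstract does not give the update equations, but the node/force picture it describes can be sketched roughly as follows. The snippet below is a minimal, hypothetical illustration and not the authors' actual TSNB algorithm: detection scores play the role of node positions, spatial relations between concepts in the same shot and temporal consistency with neighbouring shots act as forces, and a moving-cost term resists drifting away from the original detections. The function name `refine_scores` and all parameters are assumptions made only for this sketch.

```python
import numpy as np

def refine_scores(scores, spatial_rel, temporal_rel,
                  move_cost=1.0, step=0.1, iters=50):
    """Illustrative force-balance refinement over a concept graph.

    scores       : (T, C) array of initial detection scores
                   (T shots, C concepts).
    spatial_rel  : (C, C) signed relation weights between concepts
                   within the same shot (e.g. from co-occurrence stats).
    temporal_rel : (C,) weights coupling each concept to its own score
                   in neighbouring shots.
    move_cost    : resistance to moving away from the original scores.
    """
    s = scores.copy()
    for _ in range(iters):
        # Spatial force: related concepts in the same shot pull/push scores.
        f_spatial = s @ spatial_rel.T
        # Temporal force: pull toward the mean score of neighbouring shots.
        neigh = np.zeros_like(s)
        neigh[1:-1] = 0.5 * (s[:-2] + s[2:])
        neigh[0], neigh[-1] = s[1], s[-2]
        f_temporal = temporal_rel * (neigh - s)
        # Moving cost: restoring force toward the original detections.
        f_cost = move_cost * (scores - s)
        # Small damped step; normalization details are omitted for brevity.
        s += step * (f_spatial + f_temporal + f_cost)
    return np.clip(s, 0.0, 1.0)
```

In this toy formulation the refinement stops changing when the three forces cancel out for every node, which mirrors the "balance" idea in the abstract; how the paper actually defines and weights these forces would need to be taken from the full text.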