Efficient spatiotemporal-attention-driven shot matching

  • Authors:
  • Shan Li; Moon-Chuen Lee

  • Affiliations:
  • The Chinese University of Hong Kong, Shatin, N.T., Hong Kong; The Chinese University of Hong Kong, Shatin, N.T., Hong Kong

  • Venue:
  • Proceedings of the 15th international conference on Multimedia
  • Year:
  • 2007

Abstract

As human attention is an effective mechanism for prioritizing and selecting information, it offers a practical basis for intelligent shot similarity matching. In this paper, we propose an attention-driven video interpretation method built on an efficient spatiotemporal attention detection framework. Motion attention detection in most existing methods is unstable and computationally expensive; instead of computing motion explicitly, the proposed framework generates motion saliency from the rank deficiency of grayscale gradient tensors. To address the ill-posed weight determination problem, an adaptive fusion method integrates motion and spatial saliency by emphasizing the more reliable saliency maps. An attention-driven matching strategy is then proposed that converts attention values into importance factors, which boost the attended regions in region-based shot matching. A global feature-based matching strategy is also incorporated into the attention-driven strategy to handle cases where visual attention detection is less applicable. Experimental results demonstrate the advantages of the proposed method in similarity matching.
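
The abstract does not include code, but the pipeline it describes (motion saliency from spatiotemporal gradient tensors, adaptive fusion with spatial saliency, attention-weighted region matching) can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: the function names (motion_saliency, adaptive_fuse, attention_weighted_similarity), the eigenvalue-spread score used as a rank-deficiency measure, and the variance-based reliability weights are all illustrative assumptions about how such a framework might be realized.

```python
# Hedged sketch, not the authors' code: motion saliency from the rank
# deficiency of 3x3 spatiotemporal gradient (structure) tensors, plus an
# adaptive fusion with a spatial saliency map and a toy attention-weighted
# region matching score. All specific formulas here are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def motion_saliency(frames, sigma=2.0, eps=1e-6):
    """frames: (T, H, W) grayscale clip, float32. Returns a saliency map in [0, 1]."""
    It, Iy, Ix = np.gradient(frames.astype(np.float32))   # temporal, vertical, horizontal
    t = frames.shape[0] // 2                               # use the middle frame's gradients
    gx, gy, gt = Ix[t], Iy[t], It[t]
    # Structure-tensor entries, averaged over a Gaussian neighbourhood.
    Jxx = gaussian_filter(gx * gx, sigma); Jxy = gaussian_filter(gx * gy, sigma)
    Jxt = gaussian_filter(gx * gt, sigma); Jyy = gaussian_filter(gy * gy, sigma)
    Jyt = gaussian_filter(gy * gt, sigma); Jtt = gaussian_filter(gt * gt, sigma)
    H, W = gx.shape
    J = np.zeros((H, W, 3, 3), dtype=np.float32)
    J[..., 0, 0], J[..., 0, 1], J[..., 0, 2] = Jxx, Jxy, Jxt
    J[..., 1, 0], J[..., 1, 1], J[..., 1, 2] = Jxy, Jyy, Jyt
    J[..., 2, 0], J[..., 2, 1], J[..., 2, 2] = Jxt, Jyt, Jtt
    # Eigenvalue spread as one plausible (assumed) rank-deficiency score:
    # a near-rank-deficient tensor has a large gap between its extreme eigenvalues.
    lam = np.linalg.eigvalsh(J)                            # ascending per pixel
    score = (lam[..., 2] - lam[..., 0]) / (lam[..., 2] + eps)
    return (score - score.min()) / (np.ptp(score) + eps)

def adaptive_fuse(motion_map, spatial_map, eps=1e-6):
    """Weight each saliency map by a simple reliability proxy (its variance),
    so the more discriminative map dominates the fused result."""
    wm, ws = motion_map.var(), spatial_map.var()
    w = wm / (wm + ws + eps)
    return w * motion_map + (1.0 - w) * spatial_map

def attention_weighted_similarity(feats_a, feats_b, importance_a, importance_b):
    """Toy region-based matching: pairwise feature distances weighted by
    importance factors derived from per-region attention values."""
    sims = []
    for fa, ia in zip(feats_a, importance_a):
        for fb, ib in zip(feats_b, importance_b):
            d = np.linalg.norm(np.asarray(fa) - np.asarray(fb))
            sims.append(ia * ib / (1.0 + d))               # attended regions contribute more
    return float(np.mean(sims))
```

In this sketch the fusion weight is driven purely by map variance; the paper's adaptive fusion highlights "the more reliable saliency maps", and any practical reliability measure (contrast, spatial compactness, temporal consistency) could be substituted in the same place.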