As human attention is an effective mechanism for prioritizing and selecting information, it provides a practical basis for intelligent shot similarity matching. In this paper, we propose an attention-driven video interpretation method built on an efficient spatiotemporal attention detection framework. Motion attention detection in most existing methods is unstable and computationally expensive. Instead of computing motion explicitly, the proposed framework generates motion saliency from the rank deficiency of grayscale gradient tensors. To address the ill-posed weight determination problem, an adaptive fusion method integrates motion and spatial saliency by emphasizing the more reliable saliency maps. An attention-driven matching strategy is then proposed that converts attention values into importance factors, which boost the attended regions in region-based shot matching. A global feature-based matching strategy is also integrated into the attention-driven strategy to handle cases where visual attention detection is less applicable. Experimental results demonstrate the advantages of the proposed method in similarity matching.
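To make the tensor-based motion saliency idea concrete, the sketch below is a minimal illustration (not the paper's implementation) of one rank-deficiency-style score on the 3x3 spatiotemporal gradient (structure) tensor. The intuition it encodes: in a static region the gradient vectors (Ix, Iy, It) have It = 0, so the tensor is rank-deficient with its null direction along the time axis; coherent motion tilts that null direction away from the time axis. The window size and the specific tilt score are assumptions chosen for illustration.

```python
import numpy as np

def _box(a, win):
    """Box-filter a 2-D array with a win x win window via integral images."""
    pad = win // 2
    ap = np.pad(a, pad, mode='edge')
    c = np.cumsum(np.cumsum(ap, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    return (c[win:, win:] - c[:-win, win:]
            - c[win:, :-win] + c[:-win, :-win]) / win ** 2

def motion_saliency(prev, curr, nxt, win=5):
    """Per-pixel motion saliency without explicit motion estimation.

    Builds the locally averaged 3x3 gradient tensor sum(g g^T) with
    g = (Ix, Iy, It), then scores motion by how far the tensor's null
    direction (eigenvector of the smallest eigenvalue) is tilted away
    from the pure-time axis. Illustrative sketch, not the paper's code.
    """
    Iy, Ix = np.gradient(curr.astype(float))           # spatial gradients
    It = (nxt.astype(float) - prev.astype(float)) / 2  # temporal gradient
    g = np.stack([Ix, Iy, It], axis=-1)                # (H, W, 3)
    T = g[..., :, None] * g[..., None, :]              # outer products g g^T
    Tb = np.empty_like(T)
    for i in range(3):                                 # average each tensor
        for j in range(3):                             # entry over the window
            Tb[..., i, j] = _box(T[..., i, j], win)
    _, vecs = np.linalg.eigh(Tb)                       # eigenvalues ascending
    null_dir = vecs[..., :, 0]                         # smallest-eig direction
    # Static texture: null_dir = (0, 0, +-1) -> score 0; motion tilts it.
    return 1.0 - np.abs(null_dir[..., 2])
```

On a translating random texture this score is clearly larger than on the same texture held static, while never requiring an optical-flow estimate, which is the property the abstract attributes to the tensor formulation.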