From a visual-perception viewpoint, actions in videos capture high-level semantics for video content understanding and retrieval. However, action-level video retrieval faces great challenges, due to interference from global motion and concurrent actions and the difficulty of describing and matching actions robustly. This paper presents a content-based action retrieval framework that enables effective search for near-duplicate actions in large-scale video databases. First, we present an attention-shift model that distills and partitions human-attended salient actions from global motion and concurrent actions. Second, to characterize each salient action, we extract a 3D-SIFT descriptor within its spatio-temporal region, which is robust to rotation, scale, and viewpoint variation. Finally, action similarity is measured with the Dynamic Time Warping (DTW) distance, which tolerates variation in action duration and partially missing motion. Search efficiency over a large-scale dataset is achieved by hierarchical descriptor indexing and approximate nearest-neighbor search. For validation, we present a prototype system, VILAR, that supports action search within the "Friends" TV series with excellent accuracy, efficiency, and ability to reveal human perception.
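
To make the matching step concrete, the following is a minimal sketch (not the authors' implementation) of the DTW distance the abstract refers to. It assumes each action is a sequence of fixed-length frame descriptors (e.g., 3D-SIFT vectors) and uses a Euclidean per-frame cost; both choices, and the function name dtw_distance, are illustrative assumptions.

    # Minimal DTW sketch; per-frame cost and descriptor layout are assumptions.
    import numpy as np

    def dtw_distance(seq_a: np.ndarray, seq_b: np.ndarray) -> float:
        """DTW distance between descriptor sequences of shape (n, d) and (m, d)."""
        n, m = len(seq_a), len(seq_b)
        # cost[i, j] = minimal accumulated cost of aligning seq_a[:i] with seq_b[:j]
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
                cost[i, j] = d + min(
                    cost[i - 1, j],      # stretch: repeat a frame of seq_b
                    cost[i, j - 1],      # stretch: repeat a frame of seq_a
                    cost[i - 1, j - 1],  # one-to-one frame match
                )
        return float(cost[n, m])

Because the warping path may align one frame with several frames of the other sequence, two executions of the same action at different speeds can still accumulate a low cost, which is what gives DTW its tolerance for duration variance.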
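
The indexing step is likewise summarized only at a high level in the abstract. As a stand-in for the paper's unspecified hierarchical index, the sketch below illustrates approximate nearest-neighbor retrieval over a table of clip-level descriptors using SciPy's KD-tree; the 128-dimensional descriptors and database size are invented for illustration.

    # Approximate nearest-neighbor sketch; the actual index structure used by
    # the paper is not specified here, so a KD-tree serves as an illustration.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    database = rng.standard_normal((10_000, 128))  # hypothetical descriptor table
    tree = cKDTree(database)

    query = rng.standard_normal(128)
    # eps > 0 permits approximate answers: returned neighbors are within a
    # factor of (1 + eps) of the true nearest distance, trading accuracy for speed.
    dist, idx = tree.query(query, k=5, eps=0.5)
    print(idx, dist)

In a retrieval pipeline of this kind, the indices returned by the approximate query would identify candidate clips, which can then be re-ranked with the exact DTW distance above.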