In this paper we propose a new method for human action categorization that combines a new 3D gradient descriptor with an optic flow descriptor to represent spatio-temporal interest points. These points are used to represent video sequences as a bag of spatio-temporal visual words, following the successful application of this model to object and scene classification. We extensively test our approach on the standard KTH and Weizmann action datasets, demonstrating its validity and good performance. Our method outperforms state-of-the-art approaches without requiring fine parameter tuning.
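The bag-of-spatio-temporal-words pipeline outlined above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the synthetic random vectors stand in for the real 3D-gradient/optic-flow descriptors extracted at spatio-temporal interest points, the codebook size is arbitrary, and a simple nearest-prototype classifier replaces the classifier the authors would actually train (typically an SVM over the word histograms).

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Toy k-means used to quantize descriptors into a visual codebook."""
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign every descriptor to its nearest centroid
        d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

def bow_histogram(descriptors, centroids):
    """Quantize a clip's descriptors and return an L1-normalized word histogram."""
    d = np.linalg.norm(descriptors[:, None] - centroids[None], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(centroids)).astype(float)
    return hist / hist.sum()

# Synthetic stand-in data: two "action classes" whose descriptors are drawn
# from different distributions (real descriptors would come from 3D gradients
# and optic flow computed around detected spatio-temporal interest points).
train_a = [rng.normal(0, 1, (50, 8)) for _ in range(5)]  # 5 clips of class "a"
train_b = [rng.normal(3, 1, (50, 8)) for _ in range(5)]  # 5 clips of class "b"

# Build the visual vocabulary from all training descriptors.
codebook = kmeans(np.vstack(train_a + train_b), k=10)

# Represent each class by its mean bag-of-words histogram (toy classifier).
proto_a = np.mean([bow_histogram(v, codebook) for v in train_a], axis=0)
proto_b = np.mean([bow_histogram(v, codebook) for v in train_b], axis=0)

# Classify an unseen clip drawn from class "b" by nearest prototype.
test_clip = rng.normal(3, 1, (50, 8))
h = bow_histogram(test_clip, codebook)
pred = "a" if np.linalg.norm(h - proto_a) < np.linalg.norm(h - proto_b) else "b"
print(pred)
```

The key property this sketch preserves is that a variable-length video is reduced to a fixed-length histogram over a shared vocabulary, so any standard classifier can be applied regardless of clip duration or interest-point count.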