In this paper, we propose a fast dense trajectories algorithm for human action recognition. Dense trajectories are robust to fast, irregular motions and outperform other state-of-the-art approaches such as the KLT tracker or SIFT descriptors. However, computing dense trajectories is time-consuming. To improve efficiency, we extract feature trajectories only within the region of interest (ROI) rather than over whole frames, and we use temporal pyramids to adapt the representation to different action speeds. We evaluate the method on the dataset of the Huawei/3DLife -- 3D human reconstruction and action recognition Grand Challenge at ACM Multimedia 2013. Experimental results show a significant speedup over the original dense trajectories descriptor, enabling real-time operation while remaining adaptable to different action speeds.
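The two efficiency ideas in the abstract, restricting dense sampling to an ROI and pooling features over a temporal pyramid, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: it assumes a rectangular ROI given as a bounding box and per-frame feature histograms, and all function names are hypothetical.

```python
import numpy as np

def dense_sample_in_roi(roi, stride=5):
    """Sample dense grid points only inside a bounding-box ROI
    (x, y, w, h) instead of over the whole frame, so far fewer
    trajectories need to be tracked. ROI format is an assumption."""
    x0, y0, w, h = roi
    ys = np.arange(y0, y0 + h, stride)
    xs = np.arange(x0, x0 + w, stride)
    return [(x, y) for y in ys for x in xs]

def temporal_pyramid(frame_features, levels=2):
    """Pool a sequence of per-frame feature histograms into a
    temporal pyramid: level l splits the sequence into 2**l equal
    segments and sums the histograms in each segment, which makes
    the final descriptor less sensitive to action speed."""
    feats = np.asarray(frame_features, dtype=float)
    n = len(feats)
    pooled = []
    for l in range(levels):
        segments = 2 ** l
        bounds = np.linspace(0, n, segments + 1).astype(int)
        for s in range(segments):
            pooled.append(feats[bounds[s]:bounds[s + 1]].sum(axis=0))
    return np.concatenate(pooled)
```

For example, with `levels=2` an 8-frame sequence of 4-bin histograms yields one whole-sequence histogram plus two half-sequence histograms, i.e. a 12-dimensional pooled descriptor regardless of how fast the action was performed.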