Shape Matching and Object Recognition Using Shape Contexts
IEEE Transactions on Pattern Analysis and Machine Intelligence
Recognizing Human Actions: A Local SVM Approach
ICPR '04 Proceedings of the 17th International Conference on Pattern Recognition (ICPR'04), Volume 3
Histograms of Oriented Gradients for Human Detection
CVPR '05 Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), Volume 1
Behavior recognition via sparse spatio-temporal features
VS-PETS '05 Proceedings of the 2nd Joint IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance
Unsupervised Learning of Human Action Categories Using Spatial-Temporal Words
International Journal of Computer Vision
Active Exploration Using Bayesian Models for Multimodal Perception
ICIAR '08 Proceedings of the 5th international conference on Image Analysis and Recognition
Spatio-temporal shape contexts for human action retrieval
IMCE '09 Proceedings of the 1st international workshop on Interactive multimedia for consumer electronics
Feature detector and descriptor evaluation in human action recognition
Proceedings of the ACM International Conference on Image and Video Retrieval
Object, scene and actions: combining multiple features for human action recognition
ECCV '10 Proceedings of the 11th European Conference on Computer Vision: Part I
Action Recognition by Multiple Features and Hyper-Sphere Multi-class SVM
ICPR '10 Proceedings of the 2010 20th International Conference on Pattern Recognition
LIBSVM: A library for support vector machines
ACM Transactions on Intelligent Systems and Technology (TIST)
Action recognition using context and appearance distribution features
CVPR '11 Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition
In this paper, we propose to integrate structural information with appearance features for human action recognition. In local representations based on detected spatio-temporal interest points (STIPs), the layout of STIPs carries important cues about motion structure in video sequences and is assumed to contain information complementary to appearance features. We aim to incorporate this structure into the description of STIPs by combining it with appearance features for action representation. Building on previous work on the 3D shape context, we present an optimised version of the 3D shape context to encode the layout information of STIPs. By combining the proposed optimised 3D shape context with appearance descriptors, e.g., HOG3D and 3D gradients, we provide a more informative and discriminative description of STIPs for action classification. To validate the proposed descriptor, we have conducted extensive experiments on the KTH and UCF YouTube datasets. The results show that the optimised 3D shape context offers information complementary to appearance features, demonstrating its effectiveness for action representation; moreover, the proposed descriptor yields results comparable to those of state-of-the-art methods.
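The core idea — encoding the spatio-temporal layout of STIPs with a 3D shape context histogram and concatenating it with an appearance descriptor — can be illustrated with a minimal sketch. This is a hypothetical implementation for illustration only, not the paper's optimised version: the bin counts (`n_r`, `n_theta`, `n_phi`), the log-radial binning, and the choice of the temporal axis as the polar axis are all assumptions; the actual paper's binning scheme and normalisation may differ.

```python
import numpy as np

def shape_context_3d(points, n_r=4, n_theta=8, n_phi=4):
    """Sketch of a 3D shape context over STIP coordinates.

    points: (N, 3) array of (t, x, y) interest-point locations.
    For each point, histogram the relative positions of all other
    points into log-spherical bins (radius x azimuth x polar angle).
    Returns an (N, n_r * n_theta * n_phi) array of L1-normalised
    layout descriptors.
    """
    points = np.asarray(points, dtype=float)
    n = len(points)
    n_bins = n_r * n_theta * n_phi
    descs = np.zeros((n, n_bins))
    for i in range(n):
        rel = np.delete(points, i, axis=0) - points[i]  # offsets to neighbours
        r = np.linalg.norm(rel, axis=1)
        r_max = r.max() + 1e-9
        # Log-radial bin: finer resolution near the reference point.
        r_bin = np.clip((np.log1p(r) / np.log1p(r_max) * n_r).astype(int),
                        0, n_r - 1)
        # Azimuth in the spatial (x, y) plane.
        theta = np.arctan2(rel[:, 2], rel[:, 1])
        t_bin = ((theta + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
        # Polar angle measured from the temporal axis.
        phi = np.arccos(np.clip(rel[:, 0] / (r + 1e-9), -1.0, 1.0))
        p_bin = np.clip((phi / np.pi * n_phi).astype(int), 0, n_phi - 1)
        flat = (r_bin * n_theta + t_bin) * n_phi + p_bin
        hist = np.bincount(flat, minlength=n_bins).astype(float)
        descs[i] = hist / max(hist.sum(), 1.0)
    return descs

def combine_with_appearance(layout_desc, appearance_desc):
    """Concatenate layout and appearance descriptors per STIP,
    as in the paper's combined representation."""
    return np.hstack([layout_desc, appearance_desc])
```

In this sketch, each STIP's final descriptor is simply the concatenation of its layout histogram with an appearance descriptor (e.g., HOG3D), after which any standard classifier such as an SVM can be trained on the combined features.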