Recognizing Action at a Distance
ICCV '03: Proceedings of the Ninth IEEE International Conference on Computer Vision, Volume 2
Recognizing Human Actions: A Local SVM Approach
ICPR '04: Proceedings of the 17th International Conference on Pattern Recognition, Volume 3
Histograms of Oriented Gradients for Human Detection
CVPR '05: Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Volume 1
Free viewpoint action recognition using motion history volumes
Computer Vision and Image Understanding - Special issue on modeling people: Vision-based understanding of a person's shape, appearance, movement, and behaviour
Behavior recognition via sparse spatio-temporal features
ICCCN '05 Proceedings of the 14th International Conference on Computer Communications and Networks
IEEE Transactions on Pattern Analysis and Machine Intelligence
An efficient Bayesian framework for on-line action recognition
ICIP '09: Proceedings of the 16th IEEE International Conference on Image Processing
Recognizing human action from a far field of view
WMVC '09: Proceedings of the 2009 International Conference on Motion and Video Computing
HMM based action recognition with projection histogram features
ICPR '10: Proceedings of the 20th International Conference on Recognizing Patterns in Signals, Speech, Images, and Videos
Action recognition in video by sparse representation on covariance manifolds of silhouette tunnels
ICPR '10: Proceedings of the 20th International Conference on Recognizing Patterns in Signals, Speech, Images, and Videos
Variations of a hough-voting action recognition system
ICPR '10: Proceedings of the 20th International Conference on Recognizing Patterns in Signals, Speech, Images, and Videos
Using a Product Manifold distance for unsupervised action recognition
Image and Vision Computing
Sparse Modeling of Human Actions from Motion Imagery
International Journal of Computer Vision
Mid-level features and spatio-temporal context for activity recognition
Pattern Recognition
Predicting human activities using spatio-temporal structure of interest points
Proceedings of the 20th ACM International Conference on Multimedia
Trajectory signature for action recognition in video
Proceedings of the 20th ACM International Conference on Multimedia
Spatio-Temporal phrases for activity recognition
ECCV '12: Proceedings of the 12th European Conference on Computer Vision, Part III
Vector field analysis for multi-object behavior modeling
Image and Vision Computing
Exploring dense trajectory feature and encoding methods for human interaction recognition
Proceedings of the Fifth International Conference on Internet Multimedia Computing and Service
Exploring STIP-based models for recognizing human interactions in TV videos
Pattern Recognition Letters
Spatio-temporal layout of human actions for improved bag-of-words action detection
Pattern Recognition Letters
Machine Vision and Applications
This paper summarizes the results of the 1st Contest on Semantic Description of Human Activities (SDHA), held in conjunction with ICPR 2010. SDHA 2010 consists of three challenges: the High-level Human Interaction Recognition Challenge, the Aerial View Activity Classification Challenge, and the Wide-Area Activity Search and Recognition Challenge. The challenges are designed to encourage participants to test existing methodologies and to develop new approaches for complex human activity recognition scenarios in realistic environments. We introduce three new public datasets through these challenges, and discuss the results of the state-of-the-art activity recognition systems designed and implemented by the contestants. A methodology using spatio-temporal voting [19] successfully classified segmented videos in the UT-Interaction datasets, but had difficulty correctly localizing activities in continuous videos. Both the method using local features [10] and the HMM-based method [18] successfully recognized actions from low-resolution videos (i.e., the UT-Tower dataset). We compare their results in this paper.