3D human action recognition using spatio-temporal motion templates
ICCV'05 Proceedings of the 2005 international conference on Computer Vision in Human-Computer Interaction
This article describes a novel approach to modeling human actions in 3D. The proposed method is based on a "bag of poses" model that represents a human action as a histogram of key-pose occurrences over the course of a video sequence. Each pose is first encoded as a vector of 36 direction cosines, corresponding to the angles that 12 joints of an articulated human body model form with the world coordinate frame. These pose vectors are then projected onto three-dimensional, action-specific principal eigenspaces, which we refer to as aSpaces. We introduce a key-pose selection method based on a local motion-energy optimization criterion, and we show that it is more stable and more robust to noisy data than other key-pose selection criteria for action recognition.
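The pipeline in the abstract — projecting 36-dimensional pose vectors onto a low-dimensional eigenspace and histogramming nearest-key-pose assignments — can be sketched in a few lines. This is a minimal illustration using NumPy, not the authors' implementation: the function names are hypothetical, and the paper's motion-energy key-pose selection is replaced here by naive frame subsampling purely for demonstration.

```python
import numpy as np

def project_to_aspace(poses, n_components=3):
    """Project 36-dim pose vectors onto a 3-D action-specific eigenspace (aSpace) via PCA."""
    mean = poses.mean(axis=0)
    centered = poses - mean
    # SVD of the centered data gives the principal directions in the rows of vt.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components].T            # (36, n_components) projection basis
    return centered @ basis, mean, basis

def bag_of_poses(projected, key_poses):
    """Normalized histogram of nearest-key-pose occurrences over a sequence."""
    # Distance from every frame to every key pose in the aSpace.
    d = np.linalg.norm(projected[:, None, :] - key_poses[None, :, :], axis=2)
    labels = d.argmin(axis=1)              # nearest key pose per frame
    hist = np.bincount(labels, minlength=len(key_poses)).astype(float)
    return hist / hist.sum()

# Toy demo: random stand-ins for 120 frames of 36 direction cosines.
rng = np.random.default_rng(0)
seq = rng.normal(size=(120, 36))
proj, mean, basis = project_to_aspace(seq)
keys = proj[::30]                          # placeholder "key poses": every 30th frame
histogram = bag_of_poses(proj, keys)       # the action descriptor
```

The resulting histogram is the fixed-length descriptor that a classifier would consume; in the paper, the key poses themselves come from the motion-energy optimization rather than uniform sampling.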