Human action recognition is an important problem in computer vision. Most existing techniques use all the video frames to represent an action, which incurs a high computational cost. In contrast, we present a novel action recognition approach that describes an action with a small number of frames of representative poses, termed kPoses. First, a set of pose templates corresponding to different pose classes is learned using a newly proposed Pose-Weighted Distribution Model (PWDM). Second, a local set of kPoses describing each action is extracted by clustering the poses belonging to that action. Third, a further kPose selection removes redundant poses across the different local sets, yielding a global kPose set with minimal redundancy. Finally, each action is described as a sequence of kPoses obtained by finding the nearest kPose in the global set, and classification is performed by comparing the resulting pose sequence against each local set of kPoses. Experimental results validate the proposed method with remarkable recognition accuracy.
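The pipeline above can be sketched in two steps: cluster an action's pose descriptors to obtain representative kPoses, then encode a video as the sequence of nearest-kPose indices. The sketch below is a minimal illustration only; it substitutes plain k-means for the paper's PWDM-based template learning and redundancy-removing kPose selection, and the function names and descriptor format are assumptions, not the authors' implementation.

```python
import numpy as np

def extract_kposes(pose_descriptors, k, iters=20, seed=0):
    """Cluster one action's pose descriptors into k representative poses.

    Simplified stand-in for kPose extraction: plain k-means centroids,
    not the paper's PWDM-based pose templates (an assumption of this sketch).
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(pose_descriptors, dtype=float)
    # initialise centroids from k distinct sample poses
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign every pose to its nearest centroid (Euclidean distance)
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its assigned poses
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def encode_sequence(frame_descriptors, kposes):
    """Describe a video as the index sequence of nearest kPoses per frame."""
    F = np.asarray(frame_descriptors, dtype=float)
    dists = np.linalg.norm(F[:, None, :] - kposes[None, :, :], axis=2)
    return dists.argmin(axis=1)
```

With a global kPose set built this way, a query video becomes a short index sequence that can be compared against each action's local kPose set for classification, as the abstract describes.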