This work presents an approach for recognizing 3D human gestures from depth images. The proposed motion trail model (MTM) encodes both motion information and static posture information over the gesture sequence in the xoy-plane. By projecting the depth images onto two additional planes in 3D space, a gesture can be represented with complementary information beyond the lateral view parallel to the image plane; the 2D-MTM is thus extended into 3D space to form the 3D-MTM. The Histogram of Oriented Gradients (HOG) is then extracted from the proposed 3D-MTM as the feature descriptor, and recognition is performed by selecting the class with the maximum correlation coefficient. Preliminary results on the ChaLearn gesture dataset show that the proposed approach reduces the average error rate from 62.80% for the baseline method to 21.74%.
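The pipeline described above can be sketched in simplified form: accumulate a motion-trail image from a depth sequence, extract a HOG-style descriptor from it, and classify by maximum Pearson correlation against per-class templates. This is a minimal illustration, not the authors' implementation; the motion threshold, cell size, bin count, and the `motion_trail`, `hog_descriptor`, and `classify` helpers are all assumptions for the sketch, and the side-plane projections of the full 3D-MTM are omitted.

```python
import numpy as np

def motion_trail(frames, thresh=10.0):
    """MHI-style motion trail from a (T, H, W) depth sequence.

    Pixels that moved more recently get larger values, so the trail
    encodes both where and roughly when motion occurred.
    (thresh is an assumed motion-detection threshold, not from the paper.)
    """
    frames = np.asarray(frames, dtype=float)
    T = frames.shape[0]
    mtm = np.zeros(frames.shape[1:], dtype=float)
    for t in range(1, T):
        moved = np.abs(frames[t] - frames[t - 1]) > thresh
        mtm[moved] = t / (T - 1)  # timestamp of the latest motion
    return mtm

def hog_descriptor(img, n_bins=9, cell=8):
    """Simplified HOG: per-cell orientation histograms, L2-normalized."""
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    # unsigned orientation in [0, pi)
    ang = np.mod(np.arctan2(gy, gx), np.pi)
    feats = []
    H, W = img.shape
    for i in range(0, H - cell + 1, cell):
        for j in range(0, W - cell + 1, cell):
            b = (ang[i:i + cell, j:j + cell] / np.pi * n_bins).astype(int) % n_bins
            hist = np.bincount(b.ravel(),
                               weights=mag[i:i + cell, j:j + cell].ravel(),
                               minlength=n_bins)
            feats.append(hist / (np.linalg.norm(hist) + 1e-6))
    return np.concatenate(feats)

def classify(query, templates):
    """Pick the class whose template descriptor has the maximum
    Pearson correlation coefficient with the query descriptor."""
    return max(templates,
               key=lambda k: np.corrcoef(query, templates[k])[0, 1])
```

A gesture would then be recognized by computing `hog_descriptor(motion_trail(depth_frames))` and passing it to `classify` along with one stored template descriptor per gesture class.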