Visual interpretation of gestures can be useful for accomplishing natural human-robot interaction (HRI). Previous HRI research has focused on issues such as hand gestures, sign language, and command gesture recognition, but natural HRI also requires automatic recognition of whole-body gestures. This is a challenging problem, because describing and modeling meaningful gesture patterns from whole-body motion is a complex task. This paper presents a new method for recognizing whole-body key gestures in HRI. A human subject is first described by a set of features encoding the angular relationships among a dozen body parts in 3-D. Each feature vector is then mapped to a codeword of hidden Markov models. To spot key gestures accurately, a method for designing a transition gesture model is proposed; to reduce the number of states in this model, similar states are merged based on data-dependent statistics and relative entropy. The experimental results demonstrate that the proposed method is efficient and effective for automatic recognition of whole-body key gestures from motion sequences in HRI.
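The front end of the pipeline described above (angular body-part features quantized to discrete codewords for an HMM) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the kinematic chain, the codebook, and the function names are hypothetical stand-ins for the dozen 3-D body parts and the learned codebook the abstract refers to.

```python
import numpy as np

def joint_angle_features(joints):
    """Encode angular relationships along a kinematic chain.

    `joints` is an (N, 3) array of 3-D joint positions (a simplified
    stand-in for the paper's dozen body parts). Returns the angle, in
    radians, at each interior joint.
    """
    angles = []
    for i in range(1, len(joints) - 1):
        a = joints[i - 1] - joints[i]  # vector toward previous joint
        b = joints[i + 1] - joints[i]  # vector toward next joint
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        angles.append(np.arccos(np.clip(cos, -1.0, 1.0)))
    return np.array(angles)

def quantize(feature, codebook):
    """Map a feature vector to the index of its nearest codebook entry,
    i.e. the discrete codeword a discrete-output HMM would consume."""
    dists = np.linalg.norm(codebook - feature, axis=1)
    return int(np.argmin(dists))
```

For example, a straight-arm pose and a bent-arm pose produce different joint angles and are therefore quantized to different codewords; a sequence of such codewords over time is what the gesture and transition HMMs are trained on.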