In this paper, we present a real-time 3D pointing gesture recognition algorithm for mobile robots, based on a cascade hidden Markov model (HMM) and a particle filter. Among the various human gestures, the pointing gesture is especially useful for human-robot interaction (HRI): it is highly intuitive, requires no a priori assumptions, and has no substitute in other modes of interaction. A major issue in pointing gesture recognition is the difficulty of accurately estimating the pointing direction, which stems from the difficulty of hand tracking and the unreliability of direction estimation. The proposed method uses a stereo camera and 3D particle filters for reliable hand tracking, and a cascade of two HMMs for a robust estimate of the pointing direction. When a subject enters the field of view of the camera, his or her face and two hands are located and tracked using particle filters. The first-stage HMM takes the hand position estimate and maps it to a more accurate position by modeling the kinematic characteristics of finger pointing. The resulting 3D coordinates are fed into the second-stage HMM, which discriminates pointing gestures from other gesture types. Finally, the pointing direction is estimated for the pointing state. The proposed method handles both large and small pointing gestures. Experimental results show gesture recognition and target selection rates better than 89% and 99%, respectively, during human-robot interaction.
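The 3D particle-filter hand tracking described above can be sketched as a standard bootstrap (predict-update-resample) filter. This is a minimal generic sketch, not the paper's implementation; the motion and observation noise parameters and the random-walk motion model are illustrative assumptions.

```python
import numpy as np

def particle_filter_step(particles, weights, observation,
                         motion_std=0.02, obs_std=0.05):
    """One predict-update-resample step of a bootstrap particle filter
    tracking a 3D hand position (illustrative parameters, not the paper's).

    particles: (N, 3) array of candidate 3D hand positions (meters)
    weights:   (N,) normalized particle weights
    observation: (3,) 3D hand measurement, e.g. from stereo triangulation
    """
    n = len(particles)
    # Predict: propagate particles with a simple random-walk motion model
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)
    # Update: reweight by a Gaussian likelihood of the stereo observation
    sq_err = np.sum((particles - observation) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * sq_err / obs_std ** 2)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses below n/2
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = np.random.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    # State estimate: weighted mean of the particle cloud
    estimate = np.average(particles, axis=0, weights=weights)
    return particles, weights, estimate
```

In use, the particle cloud would be initialized around the initial hand detection and one such step run per stereo frame, with the per-frame estimate feeding the first-stage HMM.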
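The second-stage HMM's job, discriminating pointing gestures from other motion, amounts to comparing how well competing HMMs explain an observation sequence. A generic sketch using the scaled forward algorithm over discretized observations follows; the two-state models and discrete symbols in the test are illustrative assumptions, not the paper's trained models.

```python
import numpy as np

def hmm_log_likelihood(obs_seq, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm (generic sketch).

    pi: (S,) initial state distribution
    A:  (S, S) state transition matrix
    B:  (S, K) emission matrix over K discrete symbols
    """
    alpha = pi * B[:, obs_seq[0]]          # initial forward variables
    log_lik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()            # rescale to avoid underflow
    for o in obs_seq[1:]:
        alpha = (alpha @ A) * B[:, o]      # propagate and absorb emission
        log_lik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return log_lik

def is_pointing(obs_seq, pointing_model, other_model):
    """Label a sequence as a pointing gesture if the pointing HMM
    explains it better than the non-pointing model."""
    return (hmm_log_likelihood(obs_seq, *pointing_model) >
            hmm_log_likelihood(obs_seq, *other_model))
```

In a full pipeline, the observation symbols would be quantized 3D hand trajectories from the tracker, and each model would be trained on labeled gesture sequences.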