This paper presents a method for recognizing calling gestures by isolating a caller's head and hand based on octree segmentation. The calling-gesture recognition is designed mainly for the elderly to summon a service robot with a service request. A key challenge is making the recognition work in a complex environment with crowds of people, cluttered and randomly moving objects, and illumination variations. The approach taken here is to segment individual people out of the 3D point cloud acquired by a Microsoft Kinect or ASUS Xtion Pro and to detect their heads and hands in certain geometric configurations. Segmentation is made fast by representing the 3D point cloud as octree cells and clustering the cells connected by a neighborhood relationship. The head and hand in a given geometric configuration are then identified within candidate regions defined on a segmented object, using shape and color evidence. A color model in HSV color space is also discussed to define the skin color model well. The proposed method has been implemented and tested on "HomeMate," a service robot developed for elderly care, and the result of a performance evaluation is given.
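As a rough illustration of the segmentation step described above, the following is a minimal sketch (pure Python, with hypothetical function names; the paper's actual octree implementation is not reproduced here) that bins a point cloud into fixed-size cells and clusters occupied cells by 26-neighborhood connectivity, which approximates clustering octree leaf cells connected by a neighborhood relationship:

```python
from collections import deque

def voxelize(points, cell_size):
    """Map each 3D point to the integer index of the cell it occupies."""
    cells = {}
    for p in points:
        idx = tuple(int(c // cell_size) for c in p)
        cells.setdefault(idx, []).append(p)
    return cells

def cluster_cells(cells):
    """Group occupied cells into connected components using the
    26-neighborhood, approximating neighborhood-based octree-cell
    clustering for object segmentation."""
    neighbors = [(dx, dy, dz)
                 for dx in (-1, 0, 1)
                 for dy in (-1, 0, 1)
                 for dz in (-1, 0, 1)
                 if (dx, dy, dz) != (0, 0, 0)]
    unvisited = set(cells)
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, component = deque([seed]), [seed]
        while queue:
            cx, cy, cz = queue.popleft()
            for dx, dy, dz in neighbors:
                n = (cx + dx, cy + dy, cz + dz)
                if n in unvisited:   # breadth-first flood fill over cells
                    unvisited.remove(n)
                    queue.append(n)
                    component.append(n)
        clusters.append(component)
    return clusters
```

Each resulting cluster would correspond to one candidate object (e.g., a person), on which head and hand regions could then be searched. A real octree offers faster neighbor lookups and adaptive resolution than this flat grid.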