An intelligent robot requires natural interaction with humans, and visual interpretation of gestures can help accomplish natural Human-Robot Interaction (HRI). Previous HRI research focused on issues such as hand gesture, sign language, and command gesture recognition. However, natural HRI requires automatic recognition of whole-body gestures, which is challenging because describing and modeling meaningful gesture patterns from whole-body motion are complex tasks. This paper presents a new method for simultaneously spotting and recognizing whole-body key gestures on a mobile robot. Because our method runs alongside other HRI components such as speech recognition and face recognition, both execution speed and recognition performance must be considered. For efficient and natural operation, we use several techniques at each step of gesture recognition: learning and extracting articulated joint information, representing a gesture as a sequence of clusters, and spotting and recognizing a gesture with a Hidden Markov Model (HMM). In addition, we constructed a large gesture database with which we verified our method. As a result, our method is successfully integrated and operated in a mobile robot.
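The cluster-sequence and HMM steps described above can be sketched as follows: joint-feature frames are vector-quantized to their nearest cluster centroid, and a key-gesture HMM scores the resulting discrete sequence with the standard forward algorithm; a gesture is spotted only when its model's score clearly exceeds that of a competing non-gesture sequence. This is a minimal illustration under invented assumptions, not the paper's implementation — the two-state model, all probabilities, the centroids, and the example frames are made up.

```python
import math

def quantize(frames, centroids):
    """Map each joint-feature frame to the index of its nearest centroid,
    giving the 'sequence of clusters' representation (centroids are toy values)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(centroids)), key=lambda k: dist2(f, centroids[k]))
            for f in frames]

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (forward algorithm over cluster indices)."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]  # initial step
    for o in obs[1:]:
        alpha = [sum(alpha[sp] * A[sp][s] for sp in range(n)) * B[s][o]
                 for s in range(n)]
    return math.log(sum(alpha))

# Invented 2-state HMM for one key gesture over 3 cluster indices.
pi = [0.8, 0.2]
A = [[0.7, 0.3],
     [0.2, 0.8]]
B = [[0.6, 0.3, 0.1],
     [0.1, 0.3, 0.6]]

centroids = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]  # toy joint-feature space
gesture_frames = [(0.1, 0.1), (0.2, 0.0), (0.9, 0.1), (1.0, 0.9), (0.9, 1.1)]
garbage_frames = [(1.1, 1.0), (0.0, 0.2), (0.9, 0.9), (0.1, 0.0), (1.0, 1.0)]

score_g = forward_log_likelihood(quantize(gesture_frames, centroids), pi, A, B)
score_n = forward_log_likelihood(quantize(garbage_frames, centroids), pi, A, B)

# Spot the gesture only when its model clearly outscores the alternative,
# analogous to thresholding against a non-gesture model.
print(score_g > score_n)
```

In a full system, one such HMM would be trained per key gesture, and spotting would compare each gesture model's score against a threshold model over a sliding window of the cluster sequence.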