Experiences with an interactive museum tour-guide robot
Artificial Intelligence - Special issue on applications of artificial intelligence
Multiple view geometry in computer vision
Gesture recognition using the Perseus architecture
CVPR '96 Proceedings of the 1996 Conference on Computer Vision and Pattern Recognition (CVPR '96)
Dynamical system representation, generation, and recognition of basic oscillatory motion gestures
FG '96 Proceedings of the 2nd International Conference on Automatic Face and Gesture Recognition (FG '96)
Prosody Based Co-analysis for Continuous Recognition of Coverbal Gestures
ICMI '02 Proceedings of the 4th IEEE International Conference on Multimodal Interfaces
Dynamic Bayesian networks for information fusion with applications to human-computer interfaces
Vision and Inertial Sensor Cooperation Using Gravity as a Vertical Reference
IEEE Transactions on Pattern Analysis and Machine Intelligence
Inertial Sensed Ego-motion for 3D Vision
Journal of Robotic Systems
Visual based human motion analysis: mapping gestures using a Puppet model
EPIA'05 Proceedings of the 12th Portuguese Conference on Progress in Artificial Intelligence
Human robot interaction based on Bayesian analysis of human movements
EPIA'07 Proceedings of the 13th Portuguese Conference on Progress in Artificial Intelligence
Feature representations for the recognition of 3D emblematic gestures
HBU'10 Proceedings of the First international conference on Human behavior understanding
This paper presents a framework for gesture recognition that models the system as a Dynamic Bayesian Network (DBN) from a marionette point of view. Incorporating human qualities such as anticipation and empathy into the perception system of a social robot remains an open issue. Our goal is to explore possible implementations and test their feasibility. Towards this end we began developing the guide robot 'Nicole', equipped with a monocular camera and an inertial sensor to observe its environment. The interaction context is a person performing gestures, to which 'Nicole' reacts by means of audio output and motion. We show that a DBN offers a human-like approach to gesture recognition, capturing the quality of anticipation through its prediction and update cycle. The novelty of our approach lies in incorporating a marionette model into the DBN as a trade-off between simple constant-acceleration models and complex articulated models.
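The prediction-and-update cycle that the abstract links to anticipation can be illustrated with a minimal sketch. This is not the authors' implementation: the gesture labels, transition matrix, and observation model below are invented for the example, and the full marionette-based DBN is reduced to a plain discrete Bayesian filter to show the two-step recursion in isolation.

```python
import numpy as np

# Hypothetical gesture classes (not from the paper).
GESTURES = ["wave", "point", "idle"]

# P(g_t | g_{t-1}): assumed transition model; row = previous gesture,
# column = current gesture. Diagonal dominance encodes gesture persistence.
TRANSITION = np.array([
    [0.8, 0.1, 0.1],
    [0.1, 0.8, 0.1],
    [0.2, 0.2, 0.6],
])

# P(z_t | g_t): assumed likelihood of each discrete observation symbol
# (e.g. a quantized motion feature) given the current gesture.
OBSERVATION = np.array([
    [0.7, 0.2, 0.1],   # wave
    [0.2, 0.7, 0.1],   # point
    [0.1, 0.1, 0.8],   # idle
])

def predict(belief):
    """Anticipation step: propagate the belief through the motion model."""
    return belief @ TRANSITION

def update(belief, z):
    """Correction step: weight the predicted belief by the likelihood of z."""
    posterior = belief * OBSERVATION[:, z]
    return posterior / posterior.sum()

def filter_sequence(observations):
    """Run the predict/update recursion from a uniform prior."""
    belief = np.full(len(GESTURES), 1.0 / len(GESTURES))
    for z in observations:
        belief = update(predict(belief), z)
    return belief

if __name__ == "__main__":
    # Repeated 'wave'-like observations should concentrate belief on 'wave'.
    belief = filter_sequence([0, 0, 0])
    print(GESTURES[int(np.argmax(belief))], belief)
```

In the paper's setting, the marionette model would replace the hand-written transition matrix, constraining how the observed body parts can move between time slices.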