Gesture recognition using a marionette model and dynamic Bayesian networks (DBNs)
ICIAR'06 Proceedings of the Third International Conference on Image Analysis and Recognition - Volume Part II
This paper presents a novel approach to analyzing the appearance of human motions with a simple model, namely by mapping the motions onto a virtual marionette model. The approach is based on a robot that uses a monocular camera to recognize the person interacting with it and to track that person's head and hands. We reconstruct 3-D trajectories from the 2-D image space (IS) by calibrating and fusing the camera images with data from an inertial sensor, applying general anthropometric data, and restricting the motions to lie on a plane. Through a virtual marionette model we map the 3-D trajectories to a feature vector in the marionette control space (MCS). This implies, inversely, that a certain set of 3-D motions can now be performed by the (virtual) marionette system. A subset of these motions is considered to convey information (i.e. gestures). We therefore aim to build a database that holds a vocabulary of gestures represented as signals in the MCS. The main contribution of this work is the computational model of the IS-MCS mapping. We introduce the guide robot “Nicole” to place our system in an embodied context. We sketch two novel approaches to representing human motion (Marionette Space and Laban analysis), and we define a gesture vocabulary organized in three sets (Cohen’s Gesture Lexicon, Pointing Gestures, and Other Gestures).
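The 3-D reconstruction step described above relies on the planar-motion restriction: once the supporting plane is known (its orientation can be anchored by the inertial sensor's gravity reference), each tracked 2-D point can be back-projected onto that plane. The sketch below illustrates the geometry only; the intrinsic matrix, plane parameters, and function name are hypothetical, not the paper's actual implementation.

```python
import numpy as np

def backproject_to_plane(u, v, K, n, d):
    """Intersect the camera ray through pixel (u, v) with the plane
    n . X = d, where n and d are expressed in the camera frame."""
    # Ray direction in camera coordinates: K^-1 @ [u, v, 1]
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Scale the ray so it satisfies the plane equation n . (t * ray) = d
    t = d / (n @ ray)
    return t * ray

# Hypothetical pinhole intrinsics: 500 px focal length, principal point (320, 240)
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
# Assume a fronto-parallel motion plane 2 m in front of the camera
n = np.array([0.0, 0.0, 1.0])
X = backproject_to_plane(320.0, 240.0, K, n, 2.0)
# The ray through the principal point meets this plane at (0, 0, 2)
```

A real system would first rotate the plane normal into the camera frame using the inertially estimated camera attitude, and would use calibrated rather than assumed intrinsics.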