Visual based human motion analysis: mapping gestures using a Puppet model

  • Authors:
  • Jörg Rett; Jorge Dias

  • Affiliations:
  • Institute of Systems and Robotics, University of Coimbra, Polo II, Coimbra, Portugal (both authors)

  • Venue:
  • EPIA'05: Proceedings of the 12th Portuguese Conference on Progress in Artificial Intelligence
  • Year:
  • 2005

Abstract

This paper presents a novel approach to analyzing the appearance of human motion with a simple model, i.e. mapping the motions using a virtual marionette model. The approach is based on a robot that uses a monocular camera to recognize the person interacting with it and to track that person's head and hands. We reconstruct 3-D trajectories from the 2-D image space (IS) by calibrating and fusing the camera images with data from an inertial sensor, applying general anthropometric data, and restricting the motions to lie on a plane. Through a virtual marionette model we map the 3-D trajectories to a feature vector in the marionette control space (MCS). Inversely, this implies that a certain set of 3-D motions can now be performed by the (virtual) marionette system. A subset of these motions is considered to convey information (i.e. gestures). Thus, we aim to build a database that holds the vocabulary of gestures, represented as signals in the MCS. The main contribution of this work is the computational model of the IS-MCS mapping. We introduce the guide robot “Nicole” to place our system in an embodied context. We sketch two novel approaches to representing human motion (i.e. Marionette Space and Labananalysis). We define a gesture vocabulary organized in three sets (i.e. Cohen’s Gesture Lexicon, Pointing Gestures and Other Gestures).
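The abstract describes a two-step pipeline: recover 3-D head and hand positions from monocular 2-D observations under a planar-motion constraint, then re-express those trajectories as control signals of a virtual marionette. The sketch below illustrates one way such a mapping could look; the pinhole intrinsics, the plane parameters, and the string-length control model are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an image-space (IS) to marionette-control-space (MCS)
# mapping, assuming a calibrated pinhole camera and motion on a known plane.
# All values and function names are hypothetical.
import numpy as np

K = np.array([[800.0,   0.0, 320.0],   # assumed camera intrinsics
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def backproject_to_plane(uv, plane_n, plane_d, K=K):
    """Recover a 3-D point from a 2-D image point by intersecting the
    viewing ray with a known plane n·X = d (the planar-motion constraint)."""
    ray = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    t = plane_d / (plane_n @ ray)        # scale along the ray to reach the plane
    return t * ray                        # 3-D point in the camera frame

def marionette_controls(head, left_hand, right_hand, bar=np.array([0.0, -2.0, 3.0])):
    """Toy MCS feature: lengths of virtual strings running from a fixed
    control bar above the figure to the tracked body parts."""
    return np.array([np.linalg.norm(p - bar) for p in (head, left_hand, right_hand)])

# Usage: assume the person moves in a fronto-parallel plane 3 m from the camera.
n, d = np.array([0.0, 0.0, 1.0]), 3.0
head = backproject_to_plane((320, 100), n, d)
lh   = backproject_to_plane((250, 300), n, d)
rh   = backproject_to_plane((390, 300), n, d)
print(marionette_controls(head, lh, rh))  # one frame of the MCS trajectory
```

Collecting such MCS vectors over time would yield the gesture signals that the paper proposes to store in its vocabulary database.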