Segmentation of hand gestures using motion capture data

  • Authors:
  • Ajay Sundar Ramakrishnan; Michael Neff

  • Affiliations:
  • University of California, Davis, Davis, CA, USA (both authors)

  • Venue:
  • Proceedings of the 2013 International Conference on Autonomous Agents and Multi-Agent Systems
  • Year:
  • 2013

Abstract

Virtual agent research on gesture increasingly relies on data-driven algorithms, which require large corpora to be trained effectively. This work presents a method for automatically segmenting human motion into gesture phases from input motion capture data. By reducing the need for manual annotation, the method allows gesture researchers to build large corpora for gesture analysis and animation modeling more easily. An effective rule set has been developed for identifying gesture phase boundaries using both joint angle and positional data of the fingers and hands. A set of Support Vector Machines, trained on a database of annotated clips, classifies each detected phase boundary as a stroke, preparation, or retraction. The approach has been tested on motion capture data obtained from different people with varied gesturing styles and in different moods; the results indicate the extent to which variation in gesturing style affects segmentation accuracy.
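The abstract's first step, rule-based detection of gesture phase boundaries from hand motion, can be illustrated with a minimal sketch. The paper's actual rules combine joint angles and positional data of the fingers and hands; the simplified stand-in below uses only a wrist-speed threshold, and the function name and threshold value are hypothetical.

```python
# Hypothetical sketch of rule-based phase-boundary detection. The paper's
# full rule set uses joint angles and finger/hand positions; here a single
# wrist-speed threshold stands in for those rules.

def detect_phase_boundaries(positions, threshold=0.05):
    """Return frame indices where per-frame hand speed crosses `threshold`.

    positions -- list of (x, y, z) wrist positions, one per frame.
    Each crossing (still -> moving or moving -> still) is treated as a
    candidate phase boundary, to be classified downstream (the paper
    uses SVMs) as a stroke, preparation, or retraction.
    """
    boundaries = []
    prev_moving = None
    for i in range(1, len(positions)):
        # Per-frame displacement magnitude as a proxy for speed.
        diffs = [a - b for a, b in zip(positions[i], positions[i - 1])]
        speed = sum(d * d for d in diffs) ** 0.5
        moving = speed > threshold
        if prev_moving is not None and moving != prev_moving:
            boundaries.append(i)
        prev_moving = moving
    return boundaries


# Usage: a still hand, a quick stroke, then stillness again.
frames = ([(0.0, 0.0, 0.0)] * 5
          + [(0.1 * k, 0.0, 0.0) for k in range(1, 6)]
          + [(0.5, 0.0, 0.0)] * 5)
print(detect_phase_boundaries(frames))  # -> [5, 10]
```

In the paper's pipeline, features around each such boundary would then be fed to the trained SVMs to label the phase type.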