Virtual agent research on gesture increasingly relies on data-driven algorithms, which require large corpora to train effectively. This work presents a method for automatically segmenting human motion into gesture phases based on input motion capture data. By reducing the need for manual annotation, the method allows gesture researchers to build large corpora for gesture analysis and animation modeling more easily. An effective rule set has been developed for identifying gesture phase boundaries using both joint-angle and positional data of the fingers and hands. A set of Support Vector Machines, trained on a database of annotated clips, classifies each detected phase boundary as a stroke, preparation, or retraction. The approach has been tested on motion capture data obtained from different people with varied gesturing styles and in different moods; the results indicate the extent to which variation in gesturing style affects segmentation accuracy.
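As a rough illustration of the kind of rule the abstract describes, the sketch below detects candidate phase boundaries as low-speed dips in a wrist-speed trace. This is a hypothetical example, not the paper's actual rule set: the `detect_phase_boundaries` function, the threshold value, and the synthetic speed data are all assumptions made for illustration.

```python
def detect_phase_boundaries(speeds, threshold=0.2):
    """Return frame indices where wrist speed has a local minimum
    below `threshold` -- candidate gesture-phase boundaries.
    (Illustrative heuristic; the paper's rules also use joint angles.)"""
    boundaries = []
    for i in range(1, len(speeds) - 1):
        # A boundary candidate: speed is low and locally minimal.
        if speeds[i] < threshold and speeds[i-1] >= speeds[i] < speeds[i+1]:
            boundaries.append(i)
    return boundaries

# Synthetic wrist-speed trace: rest -> preparation -> stroke -> retraction -> rest
speeds = [0.05, 0.4, 0.8, 0.1, 0.9, 1.2, 0.15, 0.5, 0.3, 0.05]
print(detect_phase_boundaries(speeds))  # prints [3, 6]
```

In the paper's pipeline, boundaries found this way would then be passed to the trained SVMs, which label the phase each boundary begins (stroke, preparation, or retraction).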