Automating expressive locomotion generation

  • Authors:
  • Yejin Kim; Michael Neff

  • Affiliations:
  • Department of Computer Science and Program for Technocultural Studies, University of California, Davis, CA (both authors)

  • Venue:
  • Transactions on Edutainment VII
  • Year:
  • 2012

Abstract

This paper introduces a system for expressive locomotion generation that takes as input a set of sample locomotion clips and a motion path. Notably, the system requires only a single sample of straight-path locomotion for each style modeled, yet it can produce output locomotion for an arbitrary path with arbitrary motion transition points. For efficient locomotion generation, we represent each sample with a loop sequence that encapsulates its key style and use these sequences throughout the synthesis process. Several techniques automate the synthesis: foot-plant detection from unlabeled samples, estimation of an adaptive blending length for natural style changes, and a post-processing step that enhances the physical realism of the output animation. Compared to previous approaches, the system requires significantly less data and manual labor while supporting a wide range of styles.
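
The abstract names foot-plant detection from unlabeled samples as one of the automated steps but does not describe the algorithm. The sketch below shows a common baseline for this kind of automation: a frame counts as a foot plant when the foot joint is both close to the ground plane and nearly stationary, and short runs are discarded as noise. The function names, the y-up convention, and every threshold (height_thresh, speed_thresh, min_len) are illustrative assumptions, not values from the paper.

```python
# Minimal sketch, assuming y-up world coordinates and hand-picked
# thresholds; the paper's actual detection method and parameters are
# not given in the abstract.
import numpy as np

def detect_foot_plants(foot_positions, fps=30.0,
                       height_thresh=0.05, speed_thresh=0.15):
    """Flag frames where a foot joint is near the ground and nearly
    stationary.

    foot_positions -- (num_frames, 3) world-space positions of a heel
                      or toe joint (assumed input format).
    height_thresh  -- max height above the ground plane, in meters.
    speed_thresh   -- max joint speed, in meters per second.
    Returns a boolean array with one entry per frame.
    """
    pos = np.asarray(foot_positions, dtype=float)
    vel = np.gradient(pos, axis=0) * fps          # finite-difference velocity
    speed = np.linalg.norm(vel, axis=1)
    near_ground = pos[:, 1] < height_thresh       # y-up assumption
    slow = speed < speed_thresh
    return near_ground & slow

def plant_intervals(plants, min_len=3):
    """Group consecutive planted frames into (start, end) index pairs,
    dropping runs shorter than min_len frames to suppress jitter."""
    intervals, start = [], None
    for i, planted in enumerate(plants):
        if planted and start is None:
            start = i
        elif not planted and start is not None:
            if i - start >= min_len:
                intervals.append((start, i))
            start = None
    if start is not None and len(plants) - start >= min_len:
        intervals.append((start, len(plants)))
    return intervals
```

Intervals found this way could feed the later stages the abstract mentions: plant boundaries are natural candidates for loop-sequence cut points and blend windows, and the plants themselves are the constraints a physically motivated post-processing pass would enforce to remove foot sliding.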