Learning silhouette features for control of human motion

  • Authors:
  • Liu Ren (Carnegie Mellon University, Pittsburgh, PA)
  • Gregory Shakhnarovich (Massachusetts Institute of Technology, Cambridge, MA)
  • Jessica K. Hodgins (Carnegie Mellon University, Pittsburgh, PA)
  • Hanspeter Pfister (Mitsubishi Electric Research Laboratories, Cambridge, MA)
  • Paul Viola (Microsoft Research, Redmond, WA)

  • Venue:
  • ACM Transactions on Graphics (TOG)
  • Year:
  • 2005


Abstract

We present a vision-based performance interface for controlling animated human characters. The system interactively combines information about the user's motion, contained in silhouettes from three viewpoints, with domain knowledge contained in a motion capture database to produce a high-quality animation. Such an interactive system might be useful for authoring, for teleconferencing, or as a control interface for a character in a game. In our implementation, the user performs in front of three video cameras; the resulting silhouettes are used to estimate the user's orientation and body configuration based on a set of discriminative local features. These features are selected by a machine-learning algorithm during a preprocessing step. Sequences of motions that approximate the user's actions are extracted from the motion database and scaled in time to match the speed of the user's motion. We use swing dancing, a complex human motion, to demonstrate the effectiveness of our approach. We compare our results to those obtained with a set of global features (Hu moments) and to ground-truth measurements from a motion capture system.
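
The Hu moments mentioned as the global-feature baseline are a standard set of seven image-moment invariants. As a point of reference only (this is our illustration, not the paper's code), a minimal sketch of computing them for a binary silhouette using OpenCV might look like this:

```python
import cv2
import numpy as np

def silhouette_hu_features(silhouette: np.ndarray) -> np.ndarray:
    """Compute the seven Hu moment invariants of a binary silhouette.

    Hu moments are global shape descriptors, invariant to translation,
    scale, and rotation -- the baseline the paper compares against.
    """
    moments = cv2.moments(silhouette.astype(np.uint8), binaryImage=True)
    hu = cv2.HuMoments(moments).flatten()
    # Log-scale the values, which span many orders of magnitude,
    # so that distances in feature space are better behaved
    # (a common practice, not something the paper specifies).
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# Toy usage: a 64x64 silhouette with a filled rectangle as the "body".
sil = np.zeros((64, 64), dtype=np.uint8)
sil[16:48, 24:40] = 1
print(silhouette_hu_features(sil))
```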
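The retrieval step, matching the performer's silhouette features against the motion capture database and time-scaling the retrieved motion to the user's speed, can be sketched as below. This is a simplified stand-in under our own assumptions: brute-force Euclidean nearest neighbor replaces the paper's search over learned discriminative local features, uniform linear interpolation replaces whatever time-warping scheme the authors used, and all function names here are hypothetical.

```python
import numpy as np

def nearest_pose(query: np.ndarray, db_features: np.ndarray) -> int:
    """Return the index of the database frame whose silhouette features
    are closest (Euclidean) to the query -- a brute-force stand-in for
    the paper's search over learned local features."""
    dists = np.linalg.norm(db_features - query, axis=1)
    return int(np.argmin(dists))

def time_scale(clip: np.ndarray, factor: float) -> np.ndarray:
    """Uniformly rescale a motion clip (frames x dofs) in time by
    linear interpolation between frames, so a retrieved sequence can
    match the speed of the user's performance."""
    n_in = clip.shape[0]
    n_out = max(2, int(round(n_in * factor)))
    src = np.linspace(0.0, n_in - 1, n_out)   # fractional source frames
    lo = np.floor(src).astype(int)
    hi = np.minimum(lo + 1, n_in - 1)
    t = (src - lo)[:, None]                   # blend weight per frame
    return (1.0 - t) * clip[lo] + t * clip[hi]

# Toy usage: 100 database frames with 20-D feature vectors, and a
# 30-frame clip of 12 joint angles stretched to 125% of its length.
rng = np.random.default_rng(0)
db = rng.normal(size=(100, 20))
query = rng.normal(size=20)
print("best match frame:", nearest_pose(query, db))
clip = rng.normal(size=(30, 12))
print("scaled clip shape:", time_scale(clip, 1.25).shape)
```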