Unsupervised learning for speech motion editing

  • Authors:
  • Yong Cao; Petros Faloutsos; Frédéric Pighin

  • Affiliations:
  • University of California at Los Angeles; University of California at Los Angeles; University of Southern California

  • Venue:
  • Proceedings of the 2003 ACM SIGGRAPH/Eurographics symposium on Computer animation
  • Year:
  • 2003

Abstract

We present a new method for editing speech-related facial motions. Our method uses an unsupervised learning technique, Independent Component Analysis (ICA), to extract a set of meaningful parameters without any annotation of the data. With ICA, we are able to solve a blind source separation problem and describe the original data as a linear combination of two sources. One source captures content (speech) and the other captures style (emotion). By manipulating the independent components, we can edit the motions in intuitive ways.
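
The content/style decomposition described above can be illustrated with a minimal sketch using scikit-learn's FastICA. The motion matrix shape, the number of components, and the choice of which component plays the role of "style" are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Synthetic stand-in for captured facial motion data (assumption):
# rows are animation frames, columns are flattened marker coordinates.
rng = np.random.default_rng(0)
motion = rng.standard_normal((600, 90))

# Separate the motion into a small set of independent components.
ica = FastICA(n_components=4, random_state=0)
sources = ica.fit_transform(motion)   # shape: (frames, components)

# Editing example: suppose component 0 had been identified as the
# "style" (emotion) source; attenuating it while leaving the other
# components untouched would tone down the emotional content.
edited = sources.copy()
edited[:, 0] *= 0.5

# Map the edited sources back to marker space.
edited_motion = ica.inverse_transform(edited)
print(edited_motion.shape)            # (600, 90)
```

In this sketch the edit is a simple per-component scaling; in practice the components would first have to be inspected to decide which ones correspond to speech content and which to emotional style.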