Speech driven head motion synthesis based on a trajectory model

  • Authors:
  • Gregor Hofer; Hiroshi Shimodaira; Junichi Yamagishi

  • Affiliations:
  • University of Edinburgh (all authors)

  • Venue:
  • ACM SIGGRAPH 2007 posters
  • Year:
  • 2007

Abstract

Making human-like characters more natural and life-like requires more inventive approaches than current standard techniques such as synthesis using text features or triggers. In this poster we present a novel approach to automatically synthesise head motion based on speech features. Previous work has focused on frame-wise modelling of motion [Busso et al. 2007] or has treated the speech and motion data streams separately [Brand 1999], even though the trajectories of head motion and speech features are highly correlated and change dynamically over several frames. To model longer units of motion and speech and to reproduce their trajectories during synthesis, we utilise a promising time-series stochastic model, the "Trajectory Hidden Markov Model" [Zen et al. 2007]. Its parameter generation algorithm can produce motion trajectories from sequences of units of motion and speech. These two kinds of data are modelled simultaneously by using a multi-stream variant of the trajectory HMM. The model can be viewed as a Kalman-smoother-like approach, and is thereby capable of producing smooth trajectories.
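To make the trajectory-generation step concrete, the sketch below shows the standard maximum-likelihood parameter generation used by trajectory HMMs [Zen et al. 2007]: given per-frame Gaussian means and variances of static and delta (velocity) features from the HMM state sequence, the smoothest trajectory is obtained by solving a linear system. This is a minimal illustration only; the function name `mlpg`, the variable names, and the simple centred-difference delta window are our assumptions, not the authors' code.

```python
import numpy as np

def mlpg(mu, var, delta_win=(-0.5, 0.0, 0.5)):
    """Generate a 1-D trajectory from static+delta Gaussian statistics.

    mu, var : (T, 2) arrays -- per-frame mean/variance of the static (col 0)
              and delta (col 1) feature, taken from the HMM state sequence.
    Returns : (T,) trajectory c solving (W' S^-1 W) c = W' S^-1 mu,
              where S is the diagonal covariance of the stacked features.
    """
    T = mu.shape[0]
    # W maps the static trajectory c to stacked [static; delta] observations.
    W = np.zeros((2 * T, T))
    for t in range(T):
        W[2 * t, t] = 1.0                          # static row: identity
        for k, w in zip((-1, 0, 1), delta_win):    # delta row: finite difference
            if 0 <= t + k < T:
                W[2 * t + 1, t + k] = w
    o = mu.reshape(-1)                             # interleaved means
    p = 1.0 / var.reshape(-1)                      # interleaved precisions
    A = W.T @ (p[:, None] * W)                     # W' S^-1 W
    b = W.T @ (p * o)                              # W' S^-1 mu
    return np.linalg.solve(A, b)

# Toy usage: noisy static means combined with near-zero delta means yield a
# smoothed trajectory, analogous to the output of a Kalman smoother.
T = 50
mu = np.stack([np.sin(np.linspace(0, 3, T)) + 0.1 * np.random.randn(T),
               np.zeros(T)], axis=1)
var = np.stack([np.full(T, 0.1), np.full(T, 0.01)], axis=1)
traj = mlpg(mu, var)
```

Because the delta constraints couple neighbouring frames through W, the solution trades off fidelity to the per-frame static means against smoothness, which is the sense in which the abstract describes the model as Kalman-smoother-like.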