A nonparametric regression model for virtual humans generation

  • Authors:
  • Yun-Feng Chou; Zen-Chung Shih

  • Affiliations:
  • Department of Computer Science, National Chiao Tung University, Hsinchu City, Taiwan (both authors)

  • Venue:
  • Multimedia Tools and Applications
  • Year:
  • 2010


Abstract

In this paper, we propose a novel nonparametric regression model that generates virtual humans from still images for applications in next-generation (NG) environments. The model automatically synthesizes deformed character shapes using kernel regression with elliptic radial basis functions (ERBFs) and locally weighted regression (LOESS). Kernel regression with ERBFs is used to represent the deformed character shapes and to create lively animated talking faces. To preserve patterns within the shapes, LOESS is applied to fit the details with local control. The results show that our method effectively simulates plausible movements for character animation, including body movement simulation, novel view synthesis, and expressive facial animation synchronized with input speech. The proposed model is therefore especially suitable for intelligent multimedia applications involving virtual human generation.
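To make the kernel-regression component concrete, the sketch below shows Nadaraya-Watson-style kernel regression with elliptic (axis-aligned anisotropic) Gaussian RBF weights. It is a minimal illustration under assumed forms: the specific kernel expression, the choice of centers, and the per-axis bandwidths are assumptions for illustration, not the paper's exact formulation, and the LOESS detail-fitting step is not shown.

```python
import numpy as np

def erbf_weights(x, centers, bandwidths):
    """Elliptic RBF weights: anisotropic Gaussian with one bandwidth per axis.

    x          : (d,) query point
    centers    : (n, d) kernel centers (e.g., control points of a shape)
    bandwidths : (d,) per-axis scales defining the ellipse axes (assumed form)
    """
    diff = (x - centers) / bandwidths            # scale each axis independently
    return np.exp(-0.5 * np.sum(diff ** 2, axis=1))

def kernel_regression(x, centers, values, bandwidths):
    """Nadaraya-Watson estimate: weighted average of known deformation targets.

    values : (n, k) deformation targets attached to each center
    Returns the interpolated (k,) deformation at query point x.
    """
    w = erbf_weights(x, centers, bandwidths)
    w = w / (w.sum() + 1e-12)                    # normalize; epsilon avoids 0/0
    return w @ values

# Toy usage: interpolate 2-D displacements defined at 4 control points.
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
values = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1]])
bandwidths = np.array([0.5, 0.25])               # elliptic: tighter along y
print(kernel_regression(np.array([0.5, 0.5]), centers, values, bandwidths))
```

In this kind of scheme, the elliptic bandwidths let the influence of each center stretch differently along each axis, which is what distinguishes ERBF-based regression from an ordinary isotropic RBF interpolant.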