Depth-Varying Human Video Sprite Synthesis

  • Authors:
  • Wei Hua; Wenzhuo Yang; Zilong Dong; Guofeng Zhang

  • Affiliations:
  • State Key Lab of CAD&CG, Zhejiang University, Hangzhou, P.R. China (all authors)

  • Venue:
  • Transactions on Edutainment VII
  • Year:
  • 2012


Abstract

Video texture is an appealing technique for extracting and replaying natural human motion from video shots. There has been much research on video texture analysis, generation, and interactive control. However, the video sprites created by existing methods are typically restricted to constant depths, which strongly limits motion diversity. In this paper, we propose a novel depth-varying human video sprite synthesis method that significantly increases the degrees of freedom of the human video sprite. We introduce a novel image distance function encoding scale variation, which can effectively measure the similarity between human snapshots at different depths/scales and poses, making it possible to align similar poses at different depths. Transitions between non-consecutive frames are modeled with a 2D transformation matrix, which effectively avoids drifting without requiring markers or user intervention. The synthesized depth-varying human video sprites can be seamlessly inserted into new scenes for realistic video composition. A variety of challenging examples demonstrate the effectiveness of our method.
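To make the abstract's idea of a scale-encoding image distance concrete, the sketch below shows one plausible shape such a function could take: an appearance term that compares two human snapshots after resampling them to a common resolution, plus a penalty on their depth/scale ratio. The function name, the resampling scheme, and the exact form of both terms are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def scale_aware_distance(img_a, scale_a, img_b, scale_b, lam=0.5):
    """Toy distance between two human snapshots at different depths/scales.

    img_a, img_b: 2D grayscale crops around the person.
    scale_a, scale_b: relative scale factors (e.g. bounding-box heights).
    lam: weight of the scale term (hypothetical parameter).
    """
    # Resample both crops to a common 32x32 grid via nearest-neighbor
    # index selection, so crops of different sizes become comparable.
    h, w = 32, 32
    def resample(img):
        rows = np.linspace(0, img.shape[0] - 1, h).astype(int)
        cols = np.linspace(0, img.shape[1] - 1, w).astype(int)
        return img[np.ix_(rows, cols)].astype(float)

    a, b = resample(img_a), resample(img_b)
    appearance = np.sqrt(np.mean((a - b) ** 2))     # pose/appearance term
    scale_penalty = abs(np.log(scale_a / scale_b))  # depth/scale term
    return appearance + lam * scale_penalty
```

Because the scale term uses the absolute log-ratio, the distance is symmetric in its two snapshots and grows smoothly as the depth gap widens, which is the behavior a transition-matching step would need when aligning similar poses at different depths.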