Video textures are an appealing way to extract and replay natural human motion from video shots, and there has been much research on video texture analysis, synthesis, and interactive control. However, the video sprites created by existing methods are typically restricted to a constant depth, which strongly limits their motion diversity. In this paper, we propose a novel depth-varying human video sprite synthesis method that significantly increases the degrees of freedom of human video sprites. We introduce a novel image distance function that encodes scale variation and can effectively compare human snapshots with different depths/scales and poses, making it possible to align similar poses at different depths. Transitions among non-consecutive frames are modeled as 2D transformation matrices, which effectively avoids drifting without relying on markers or user intervention. The synthesized depth-varying human video sprites can be seamlessly inserted into new scenes for realistic video composition. A variety of challenging examples demonstrate the effectiveness of our method.
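To illustrate the idea of a scale-encoding distance between sprite snapshots, here is a minimal sketch (not the paper's actual formulation): snapshots are normalized to a canonical grid so that the same pose at different depths compares favorably, while a scale-ratio penalty keeps large depth jumps apart. The function names, the canonical grid size, and the weighting `alpha` are all illustrative assumptions.

```python
import numpy as np

def resize_nearest(img, size):
    """Nearest-neighbour resize to a common (h, w) grid (no external deps)."""
    h, w = img.shape
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return img[rows[:, None], cols]

def sprite_distance(a, b, alpha=0.5, canon=(64, 32)):
    """Scale-aware distance between two sprite snapshots (illustrative).

    Appearance term: mean squared difference after normalizing both
    snapshots to a canonical grid, so pose is compared independently of
    depth.  Scale term: absolute log-ratio of bounding-box heights, so
    large depth/scale jumps are still penalized.
    """
    app = np.mean((resize_nearest(a, canon) - resize_nearest(b, canon)) ** 2)
    scale = abs(np.log(a.shape[0] / b.shape[0]))
    return app + alpha * scale
```

With this combination, a sprite and a rescaled copy of it score closer than two sprites with genuinely different appearance, which is the behavior needed to align similar poses across depths.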
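The drift-avoidance idea can also be sketched. One plausible reading of modeling transitions as 2D transformation matrices is to represent each transition as a homogeneous similarity transform (uniform scale plus translation) and compose the matrices along the synthesized path, so every frame is placed relative to the first frame rather than by chaining raw pixel offsets. This is a hedged sketch under that assumption, not the paper's implementation.

```python
import numpy as np

def similarity_matrix(s, tx, ty):
    """3x3 homogeneous 2D similarity transform: uniform scale s, translation (tx, ty)."""
    return np.array([[s,   0.0, tx],
                     [0.0, s,   ty],
                     [0.0, 0.0, 1.0]])

def compose_path(transitions):
    """Accumulate per-transition matrices into absolute placements.

    Each returned matrix maps the first frame's coordinate system to the
    current frame's, so placement errors do not accumulate frame-to-frame
    as independent pixel offsets.
    """
    M = np.eye(3)
    placements = []
    for T in transitions:
        M = M @ T
        placements.append(M.copy())
    return placements
```

For example, a transition that doubles the sprite's scale followed by one that shifts it down a unit yields a single absolute placement matrix, which can then be applied when compositing the sprite into a new scene.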