Designing motion graphs for video synthesis by tracking 2D feature points

  • Authors:
  • Jun Kobayashi, Shigeo Takahashi

  • Affiliations:
  • The University of Tokyo

  • Venue:
  • ACM SIGGRAPH ASIA 2009 Sketches
  • Year:
  • 2009

Abstract

We present an intuitive and straightforward method for synthesizing videos by manipulating objects without 3D models. Video synthesis remains costly when realistic motion must be generated directly from 3D models. The motion graph [Kovar et al. 2002] creates realistic and controllable motion from example data, but that motion data must be captured beforehand with devices that remain expensive. Schödl et al. [2002] instead defined a video object segmented from a video frame as a "video sprite" and created controllable animations using 2D motion graphs, where nodes correspond to extracted video sprites and edges represent temporal transitions between similar sprites. While their approach creates animations without 3D models, it lets users control only the positions or animation paths of objects. Our primary contribution is a novel 2D motion graph search algorithm driven by feature-point tracking, which enables users to control the detailed motion of a video object directly in screen space.
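
To make the idea concrete, here is a minimal Python sketch of a 2D motion graph over video sprites and a search guided by tracked 2D feature points. This is not the authors' implementation: the SpriteNode structure, the greedy edge-following strategy, and the pixel tolerance are all illustrative assumptions. Nodes hold the 2D feature points tracked on a sprite, and the search walks graph edges toward user-specified screen-space target configurations.

```python
# Sketch of a 2D motion graph searched by 2D feature-point targets.
# Hypothetical structures and parameters; not the paper's actual algorithm.
from dataclasses import dataclass, field
import math


@dataclass
class SpriteNode:
    """One video sprite: an object instance segmented from a frame."""
    frame_index: int
    features: list                          # tracked 2D points [(x, y), ...]
    edges: list = field(default_factory=list)  # indices of similar successor sprites


def feature_distance(a, b):
    """Sum of Euclidean distances between corresponding 2D feature points."""
    return sum(math.dist(p, q) for p, q in zip(a, b))


def search_path(nodes, start, targets, tol=5.0, max_steps=100):
    """Greedy walk over the motion graph: at each node, follow the outgoing
    edge whose sprite's feature points best match the current target
    configuration, stopping once the match is within `tol` pixels."""
    path = [start]
    current = start
    for target in targets:
        for _ in range(max_steps):
            if not nodes[current].edges:
                return path  # dead end: no similar successor sprite
            current = min(nodes[current].edges,
                          key=lambda n: feature_distance(nodes[n].features, target))
            path.append(current)
            if feature_distance(nodes[current].features, target) < tol:
                break
    return path


if __name__ == "__main__":
    # Toy graph: two sprites of one object, linked in a loop.
    nodes = [SpriteNode(0, [(10.0, 10.0)], edges=[1]),
             SpriteNode(1, [(12.0, 11.0)], edges=[0])]
    # Drive the object toward one target feature configuration.
    print(search_path(nodes, start=0, targets=[[(12.0, 11.0)]]))  # -> [0, 1]
```

The resulting index path corresponds to a sequence of video sprites to splice into the output video, so the user steers the object by dragging its feature points in screen space rather than editing a 3D model.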