Video narrative authoring with motion inpainting

  • Authors:
  • Timothy K. Shih; Joseph C. Tsai; Kuan-Ching Li

  • Affiliations:
  • National Central University, Taoyuan County, Taiwan, ROC; Tamkang University, Taipei County, Taiwan, ROC; Providence University, Taichung County, Taiwan, ROC

  • Venue:
  • Proceedings of the 1st ACM International Workshop on Multimodal Pervasive Video Analysis
  • Year:
  • 2010

Abstract

Storytelling and narrative creation are topics of recent interest in interactive media design. Instead of using virtual reality-based 3-D models, we propose a system that uses video technologies to generate a video story from existing avatars and videos, with moderate avatar control. The user is involved in only two steps: (1) selecting a background video as the video scene, and (2) picking an "object track" and setting up its trajectory. To plan a realistic narrative, several issues are considered, such as computing the relative sizes of avatars, pasting objects, and generating the video scene. We use an algorithm that aligns the motion of object tracks to make object movement smoother. To produce a static video scene, including calibrating all layers, we maintain a motion map for each video frame and use these maps as guidance when removing objects from the video and combining all frames back into a video scene. To generate a dynamic background, we propose motion inpainting, which creates dynamic textures and inserts new patches into the inpainted area. An authoring tool equipped with special functions to integrate different motion tracks for the generation of video narratives is also presented in this paper.
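
The following is a minimal sketch of the motion-map idea described above, assuming OpenCV and NumPy; the function names, the Farneback optical-flow choice, and the threshold are illustrative assumptions, not the authors' actual method.

    import cv2
    import numpy as np

    def motion_map(prev_gray, gray, thresh=2.0):
        # Per-frame motion map: dense optical flow whose magnitude,
        # once thresholded, marks moving (foreground) pixels.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        return np.linalg.norm(flow, axis=2) > thresh

    def static_scene(frames):
        # Rebuild a static background scene: mask out the pixels each
        # motion map flags as moving, then take a per-pixel median of
        # the remaining samples across all frames.
        grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
        masks = [np.zeros(grays[0].shape, bool)]  # first frame: no prior
        masks += [motion_map(a, b) for a, b in zip(grays, grays[1:])]
        stack = np.ma.masked_array(
            np.stack(frames).astype(np.float32),            # (T, H, W, 3)
            np.stack(masks)[..., None].repeat(3, axis=-1))  # mask per channel
        return np.ma.median(stack, axis=0).filled(0).astype(np.uint8)

In this sketch the masked median stands in for removing objects and recombining the frames into a scene; the paper's motion inpainting would instead fill the removed regions with dynamic-texture patches so the background keeps moving.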