3D Object Transfer Between Non-Overlapping Videos

  • Authors: Jiangjian Xiao, Xiaochun Cao, Hassan Foroosh
  • Affiliations: Sarnoff Corporation; University of Central Florida; University of Central Florida
  • Venue: VR '06: Proceedings of the IEEE Conference on Virtual Reality
  • Year: 2006


Abstract

Given two video sequences of different scenes acquired with moving cameras, it is interesting to seamlessly transfer a 3D object from one sequence to the other. In this paper, we present a video-based approach to extract the alpha mattes of rigid or approximately rigid 3D objects from one or more source videos, and then transfer them in a geometrically correct manner into another target video of a different scene. Our framework builds upon techniques in camera pose estimation, 3D spatiotemporal video alignment, depth recovery, key-frame editing, natural video matting, and image-based rendering. Based on the explicit camera pose estimation, the camera trajectories of the source and target videos are aligned in 3D space. Combined with the estimated dense depth information, this allows us to significantly relieve the burden of key-frame editing and improve the quality of video matting. During the transfer, our approach not only correctly restores the geometric deformation of the 3D object due to the different camera trajectories, but also effectively retains the soft shadow and environmental lighting properties of the object to ensure that the augmented object is in harmony with the target scene.
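The paper does not include implementation details here, but the trajectory-alignment step it describes — registering the source camera path onto the target camera path in 3D — is commonly solved with a least-squares similarity transform (scale, rotation, translation) between corresponding camera centers, e.g. Umeyama's closed-form method. The sketch below is an illustrative stand-in, not the authors' exact algorithm; the function name and the assumption of known point correspondences are mine.

```python
import numpy as np

def align_trajectories(source, target):
    """Estimate a similarity transform (scale s, rotation R, translation t)
    mapping source camera centers onto corresponding target camera centers
    in the least-squares sense (Umeyama's closed-form method).

    source, target: (N, 3) arrays of corresponding 3D camera positions.
    Returns (s, R, t) such that target ≈ s * R @ source + t.
    """
    mu_s = source.mean(axis=0)
    mu_t = target.mean(axis=0)
    src = source - mu_s
    tgt = target - mu_t
    # Cross-covariance between the centered point sets.
    cov = tgt.T @ src / len(source)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1  # correct a possible reflection in the SVD solution
    R = U @ S @ Vt
    var_s = (src ** 2).sum() / len(source)
    s = np.trace(np.diag(D) @ S) / var_s
    t = mu_t - s * R @ mu_s
    return s, R, t
```

Once (s, R, t) is known, every source camera pose can be re-expressed in the target scene's coordinate frame, which is what lets the transferred object deform consistently with the target camera trajectory.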