Motion Editing in 3D Video Database

  • Authors:
  • Jianfeng Xu, Toshihiko Yamasaki, Kiyoharu Aizawa

  • Affiliations:
  • The University of Tokyo, Japan (all authors)

  • Venue:
  • Proceedings of the Third International Symposium on 3D Data Processing, Visualization, and Transmission (3DPVT '06)
  • Year:
  • 2006

Abstract

As the next generation of media, 3D video is attracting increased attention. 3D video is a sequence of three-dimensional mesh models captured and generated from a real dynamic object. In this paper, we present a simple framework for motion editing in a 3D video database to reuse 3D video data. Our system is composed of two modules. In the first module, a motion database is automatically set up offline from original 3D video sequences by analyzing the feature vectors of each frame. We observe that our original 3D video sequences have a two-level temporal structure, and we propose a fine-to-coarse method to extract it: 3D video is first segmented into the fine-level structure by a three-reference-frame strategy and then clustered into the coarse-level structure. In the second module, users synthesize motions online to edit a new 3D video sequence. A cost function is optimized to transition between two motions according to the users' requirements. All algorithms in the system operate in the feature vector space, and the edited 3D video sequence is played back using OpenGL.
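The abstract does not detail the three-reference-frame segmentation strategy, the feature definition, or the terms of the transition cost function. The following is only a minimal sketch of the two kinds of feature-space operations it describes, under stated assumptions: per-frame feature vectors compared with plain Euclidean distance, a single running reference frame standing in for the paper's three-reference-frame strategy, and a transition cost with a hypothetical user-requirement penalty. All function names and parameters here are illustrative, not the authors' implementation.

```python
import numpy as np

def segment_fine_level(features, threshold):
    """Split a sequence of per-frame feature vectors into fine-level segments.

    Simplified stand-in for the paper's three-reference-frame strategy:
    a new segment starts when a frame drifts too far (Euclidean distance)
    from the current segment's reference frame.
    """
    boundaries = [0]
    ref = features[0]                         # reference feature vector of current segment
    for i in range(1, len(features)):
        if np.linalg.norm(features[i] - ref) > threshold:
            boundaries.append(i)              # fine-level segment boundary
            ref = features[i]
    return boundaries

def transition_cost(seg_a, seg_b, w_smooth=1.0, w_user=0.5, user_penalty=0.0):
    """Illustrative cost of transitioning from segment A to segment B.

    Combines a smoothness term (feature-space distance between A's last frame
    and B's first frame) with a hypothetical penalty reflecting how poorly the
    transition matches the user's request; lower is better.
    """
    smoothness = np.linalg.norm(seg_a[-1] - seg_b[0])
    return w_smooth * smoothness + w_user * user_penalty

# Toy usage with random 16-dimensional per-frame feature vectors.
features = np.random.rand(100, 16)
bounds = segment_fine_level(features, threshold=1.0)
print("fine-level boundaries:", bounds)
print("cost of transiting segment 0 -> 1:",
      transition_cost(features[bounds[0]:bounds[1]],
                      features[bounds[1]:bounds[2] if len(bounds) > 2 else None]))
```

In this reading, the first module would precompute segments and clusters offline, while the second module evaluates transition costs online to stitch selected motions into a new sequence for OpenGL playback.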