Multiple-view Video Coding Using Depth Map in Projective Space

  • Authors:
  • Nina Yorozu; Yuko Uematsu; Hideo Saito

  • Affiliations:
  • Keio University, Yokohama, Japan 223-8522 (all authors)

  • Venue:
  • ISVC '09 Proceedings of the 5th International Symposium on Advances in Visual Computing: Part II
  • Year:
  • 2009

Abstract

In this paper, a new video coding method using multiple uncalibrated cameras is proposed. We consider the redundancy between the cameras' viewpoints and exploit it for efficient compression based on a depth map. Since our target videos are taken with uncalibrated cameras, the depth map is computed not in the real world but in the Projective Space, a virtual space defined by the projective reconstruction of two still images. A position in this space therefore corresponds to a depth value, so full calibration of the cameras is not required. Generating the depth map requires finding correspondences between the cameras' views, for which we use a "plane sweep" algorithm. Besides the original base image and the camera parameters, our method needs to transmit only a depth map, which contributes to the effectiveness of the compression.
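
The plane-sweep correspondence search mentioned in the abstract can be illustrated with a short sketch. The snippet below is a minimal illustration under assumptions, not the authors' implementation: it assumes the homographies induced by the sweep planes (e.g. obtained from the projective reconstruction) are already available, warps a neighbouring view onto the reference view for each plane, and labels each pixel with the plane giving the lowest absolute intensity difference. Names such as `plane_sweep_depth_map` are hypothetical.

```python
# Minimal plane-sweep depth labelling sketch (illustrative, not the paper's code).
import cv2
import numpy as np

def plane_sweep_depth_map(ref_gray, nbr_gray, homographies):
    """Return, per pixel, the index of the sweep plane with the best
    photo-consistency (lowest absolute intensity difference).

    ref_gray, nbr_gray : 2-D grayscale images of equal size.
    homographies       : list of 3x3 arrays mapping the neighbour view
                         onto the reference view for each sweep plane
                         (assumed to be precomputed).
    """
    h, w = ref_gray.shape
    costs = np.empty((len(homographies), h, w), dtype=np.float32)
    for k, H in enumerate(homographies):
        # Warp the neighbour view onto the reference view via the
        # homography induced by the k-th sweep plane.
        warped = cv2.warpPerspective(nbr_gray, H, (w, h))
        costs[k] = np.abs(ref_gray.astype(np.float32) - warped.astype(np.float32))
    # The plane index that minimises the cost serves as the depth label.
    return np.argmin(costs, axis=0).astype(np.uint8)
```

In practice the per-pixel cost would typically be aggregated over a small window before taking the minimum to make the labels more robust to noise; the sketch omits that step for brevity.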