Data compression and hardware implementation of ray-space rendering for interactive augmented virtuality

  • Authors:
  • Yukio Sakagawa; Akihiro Katayama; Daisuke Kotake; Hideyuki Tamura

  • Affiliations:
  • Mixed Reality Systems Laboratory, Inc., 2-2-1 Nakane, Meguro-ku, Tokyo 152-0031, Japan (all authors)

  • Venue:
  • Presence: Teleoperators and Virtual Environments (Mixed Reality issue)
  • Year:
  • 2002


Abstract

This article describes approaches to overcoming two drawbacks of ray-space representation: the large amount of data needed to represent an object and the heavy CPU load required to render an image. Ray-space representation, an image-based rendering technique, is used in our interactive augmented virtuality system. We developed a compression method optimized for ray-space data and a hardware architecture for rendering images from ray-space data. The compression method uses a hybrid combination of motion-compensated prediction, discrete cosine transform, and vector quantization, and it compresses the data while preserving fast, random access to the decoded data. We also describe a dedicated hardware architecture that allows consumer PCs to interactively render photorealistic images from ray-space data by transferring the rendering load from the CPU to the hardware. Together, these two improvements make it possible to run applications such as an interactive virtual museum, in which scenes are generated from both geometric model data and ray-space data, on PCs with limited memory and CPU resources.
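
The abstract gives no implementation details, so the following is only a minimal, hypothetical sketch of the basic ray-space (light-field) lookup that a renderer of this kind performs: a virtual viewpoint lying on the capture line is synthesized by blending, ray by ray, the two nearest captured images. The function name `render_view`, the array layout, and the constant-depth blending are illustrative assumptions; they do not reproduce the authors' compression scheme or hardware design.

```python
import numpy as np

def render_view(ray_space, camera_xs, view_x):
    """Synthesize the image seen from position view_x on the capture line.

    ray_space : (n_cameras, H, W, 3) array of images captured along a line
    camera_xs : (n_cameras,) sorted x-positions of the capture cameras
    view_x    : x-position of the virtual viewpoint on the same line
    """
    # Find the two capture positions that bracket the virtual viewpoint.
    hi = int(np.clip(np.searchsorted(camera_xs, view_x), 1, len(camera_xs) - 1))
    lo = hi - 1
    # Linear blend weight between the two neighboring cameras.
    t = (view_x - camera_xs[lo]) / (camera_xs[hi] - camera_xs[lo])
    # Rays with the same direction (same pixel coordinates) in the two
    # neighboring images are blended -- the usual constant-depth
    # approximation when resampling a ray space / light field.
    return (1.0 - t) * ray_space[lo] + t * ray_space[hi]

# Example: 16 captured 240x320 images at x = 0, 1, ..., 15; render x = 7.3.
rays = np.random.rand(16, 240, 320, 3)
positions = np.arange(16, dtype=float)
novel_view = render_view(rays, positions, 7.3)
```

In the pipeline the abstract describes, the stored images would not be kept raw: they would be compressed with motion-compensated prediction, DCT, and vector quantization in a way that still permits random access to individual decoded rays, so that a per-pixel lookup like the one sketched above remains fast, and the lookup and blending work would be offloaded from the CPU to dedicated hardware.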