Direct point rendering on GPU

  • Authors:
  • Hiroaki Kawata; Takashi Kanai

  • Affiliations:
  • Faculty of Environmental Information, Keio University, Kanagawa, Japan; RIKEN, Integrated Volume-CAD System Research Program, Saitama, Japan

  • Venue:
  • ISVC'05 Proceedings of the First International Conference on Advances in Visual Computing
  • Year:
  • 2005


Abstract

In this paper, we propose a method for directly rendering point sets that carry only positional information, using recent graphics processing units (GPUs). Almost all of the algorithms in our method run on the GPU. Our point-based rendering algorithms use an image buffer with a lower resolution than the frame buffer. Normal vectors are computed, and various types of noise are reduced, on this image buffer. Our approach thus produces high-quality images even for noisy point clouds, especially those acquired by 3D scanning devices. Our approach also uses splats in the actual rendering process. However, the number of points rendered by our method is in general smaller than the number of input points, because only points selected on the image buffer are used; this allows our approach to run faster than previous GPU-based point rendering approaches.
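The abstract's key step, computing normals on a low-resolution buffer of point positions, can be illustrated with a minimal CPU sketch. This is a hypothetical illustration, not the authors' GPU implementation: it estimates a per-pixel normal as the cross product of screen-space finite differences of neighboring positions, much as a fragment shader would sample neighboring texels.

```python
import numpy as np

def estimate_normals(positions):
    """Estimate per-pixel unit normals from an (H, W, 3) buffer of 3D
    positions via central differences (a CPU stand-in for a GPU pass).
    Note: np.roll wraps around at the edges, so border normals are
    not meaningful in this sketch."""
    # Screen-space tangent vectors along the x and y image axes
    dx = np.roll(positions, -1, axis=1) - np.roll(positions, 1, axis=1)
    dy = np.roll(positions, -1, axis=0) - np.roll(positions, 1, axis=0)
    # Normal is the (normalized) cross product of the two tangents
    n = np.cross(dx, dy)
    norm = np.linalg.norm(n, axis=2, keepdims=True)
    return n / np.maximum(norm, 1e-12)

# Example: points sampled from the plane z = 0 should yield
# normals of (0, 0, 1) at interior pixels.
h, w = 8, 8
ys, xs = np.mgrid[0:h, 0:w].astype(float)
pos = np.dstack([xs, ys, np.zeros((h, w))])
normals = estimate_normals(pos)
```

Running the same pass on a buffer whose resolution is lower than the frame buffer's is what lets the method both denoise scanned data and reduce the number of splatted points.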