Real-time rendering of massive unstructured raw point clouds using screen-space operators

  • Authors:
  • Ruggero Pintus; Enrico Gobbetti; Marco Agus

  • Affiliations:
  • Visual Computing Group, CRS4, Italy (all authors)

  • Venue:
  • VAST'11: Proceedings of the 12th International Conference on Virtual Reality, Archaeology and Cultural Heritage
  • Year:
  • 2011



Abstract

Nowadays, 3D acquisition devices allow us to capture the geometry of huge Cultural Heritage (CH) sites, historical buildings, and urban environments. We present a scalable real-time method to render such models without requiring lengthy preprocessing. The method makes no assumptions about sampling density or the availability of per-point normal vectors. On a frame-by-frame basis, our GPU-accelerated renderer computes point cloud visibility, fills and filters the sparse depth map to generate a continuous surface representation of the point cloud, and applies a screen-space shading term to effectively convey shape features. The technique is applicable to any rendering pipeline capable of projecting points to the frame buffer. To deal with extremely massive models, we integrate it within a multi-resolution out-of-core real-time rendering framework with small precomputation times. Its effectiveness is demonstrated on a series of massive unstructured real-world Cultural Heritage datasets. The small precomputation times and low memory requirements make the method suitable for quick on-site visualizations during scan campaigns.
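The per-frame idea described in the abstract — splatting points into a sparse depth buffer, then filling gaps to obtain a continuous surface — can be illustrated with a minimal CPU sketch. The function names `splat_points` and `fill_holes` are hypothetical, and the iterative min-neighbour fill below is a deliberate simplification of the paper's screen-space fill-and-filter operators, shown only to convey the structure of such a pipeline:

```python
import math

def splat_points(points, width, height):
    """Project screen-space points (x, y, depth) into a sparse depth buffer,
    keeping the nearest depth per pixel (a software z-test). This stands in
    for the GPU point-projection pass; the real method runs on the GPU."""
    depth = [[math.inf] * width for _ in range(height)]
    for x, y, z in points:
        xi, yi = int(x), int(y)
        if 0 <= xi < width and 0 <= yi < height and z < depth[yi][xi]:
            depth[yi][xi] = z
    return depth

def fill_holes(depth, iterations=8):
    """Fill empty pixels (inf) with the minimum depth among their filled
    4-neighbours, iterated so a continuous surface grows outward from the
    sparse splats. A toy stand-in for the paper's screen-space fill/filter."""
    h, w = len(depth), len(depth[0])
    d = [row[:] for row in depth]
    for _ in range(iterations):
        nxt = [row[:] for row in d]  # Jacobi-style update: read d, write nxt
        changed = False
        for y in range(h):
            for x in range(w):
                if math.isinf(d[y][x]):
                    neigh = [d[ny][nx]
                             for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                             if 0 <= ny < h and 0 <= nx < w]
                    m = min(neigh)
                    if not math.isinf(m):
                        nxt[y][x] = m
                        changed = True
        d = nxt
        if not changed:
            break
    return d
```

In the actual method these passes run in screen space on the GPU every frame, followed by a shading term computed from the filled depth map; the sketch only mirrors the data flow.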