Layered point clouds: a simple and efficient multiresolution structure for distributing and rendering gigantic point-sampled models

  • Authors:
  • Enrico Gobbetti; Fabio Marton

  • Affiliations:
  • CRS4 Visual Computing Group, POLARIS Edificio 1, 09010 Pula (CA), Italy (both authors)

  • Venue:
  • Computers & Graphics
  • Year:
  • 2004


Abstract

We recently introduced an efficient multiresolution structure for distributing and rendering very large point-sampled models on consumer graphics platforms [1]. The structure is based on a hierarchy of precomputed object-space point clouds, which are combined coarse-to-fine at rendering time to locally adapt sample densities according to the projected size in the image. The progressive, block-based refinement nature of the rendering traversal exploits on-board caching and object-based rendering APIs, hides out-of-core data access latency through speculative prefetching, and lends itself well to incorporating backface, view-frustum, and occlusion culling, as well as compression and view-dependent progressive transmission. The resulting system renders complex out-of-core models at high frame rates (over 60M rendered points/second), supports network streaming, and is fundamentally simple to implement. We demonstrate the efficiency of the approach on a number of very large models, stored on local disks or accessed through a consumer-level broadband network, including a massive 234M-sample isosurface generated by a compressible turbulence simulation and a 167M-sample model of Michelangelo's St. Matthew. Many of the details of our framework were presented in a previous study. Here we provide a more thorough exposition, together with significant new material, including the presentation of a higher-quality bottom-up construction method and additional qualitative and quantitative results.
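To make the coarse-to-fine refinement idea concrete, the following C++ sketch illustrates one way such a traversal could look: each node of the hierarchy stores its own point cloud, every visited node's samples are rendered (so densities accumulate across levels), and a node is refined only while its object-space sample spacing projects to more than a pixel tolerance on screen. The node layout, the projectedSpacing() heuristic, and the renderPoints callback are illustrative assumptions, not the authors' actual implementation.

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// One node of the layered point-cloud hierarchy (illustrative layout).
struct PointCloudNode {
    std::vector<Vec3> points;                 // samples stored at this refinement level
    Vec3 center;                              // bounding-sphere center of the node
    float sampleSpacing;                      // average object-space distance between samples
    std::vector<PointCloudNode*> children;    // finer-level point clouds
};

static float distance(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Approximate on-screen size (in pixels) of one sample of this node,
// using a simple perspective scaling of the node's sample spacing.
static float projectedSpacing(const PointCloudNode& node, const Vec3& eye,
                              float viewportHeightPx, float fovYRadians) {
    float d = distance(node.center, eye);
    float pixelsPerUnit = viewportHeightPx / (2.0f * d * std::tan(fovYRadians * 0.5f));
    return node.sampleSpacing * pixelsPerUnit;
}

// Coarse-to-fine traversal: render the samples of every visited node and
// descend only while the current sampling is still too sparse for the view.
void refineAndRender(PointCloudNode* node, const Vec3& eye,
                     float viewportHeightPx, float fovYRadians, float pixelTolerance,
                     void (*renderPoints)(const std::vector<Vec3>&)) {
    renderPoints(node->points);   // samples of all visited levels combine on screen
    if (projectedSpacing(*node, eye, viewportHeightPx, fovYRadians) <= pixelTolerance)
        return;                   // dense enough for the projected size in the image
    for (PointCloudNode* child : node->children)
        refineAndRender(child, eye, viewportHeightPx, fovYRadians, pixelTolerance, renderPoints);
}

Because refinement stops at whole nodes rather than individual points, each node's cloud can be uploaded and cached as a single block, which is what makes the traversal compatible with object-based rendering APIs, on-board caching, and speculative prefetching as described in the abstract.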