Deep screen space

  • Authors: Oliver Nalbach; Tobias Ritschel; Hans-Peter Seidel
  • Affiliations: MPI Informatik; MPI Informatik and Saarland University/MMCI; MPI Informatik
  • Venue: Proceedings of the 18th meeting of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games
  • Year: 2014

Abstract

Computing shading such as ambient occlusion (AO), subsurface scattering (SSS) or indirect light (GI) in screen space has recently received a lot of attention. While efficient to compute, screen space methods suffer from several key limitations: occlusions, culling, under-sampling of oblique geometry and locality of the light transport. In this work we propose a deep screen space that overcomes all of these problems while retaining computational efficiency. Instead of projecting, culling, shading, rasterizing and resolving occlusions of primitives using a z-buffer, we adaptively tessellate them into surfels at a density proportional to each primitive's projected size; the surfels are optionally shaded and stored on the GPU as an unstructured surfel cloud. Objects closer to the camera receive more detail, as in classic framebuffers, but are not affected by occlusion or viewing angle. This surfel cloud can then be used to compute shading. Instead of gathering, we propose splatting to a multi-resolution interleaved framebuffer. This allows detailed shading to be exchanged between pixels close to a surfel and approximate shading between pixels far from it.
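To make the pipeline concrete, below is a minimal C++ sketch of the two decisions the abstract describes: how densely a primitive is tessellated into surfels, and which resolution level a surfel splats to. The function and parameter names (surfelBudget, surfelsPerPixel, splatLevel) and the level-doubling rule are illustrative assumptions, not the paper's notation; the paper performs these steps on the GPU.

```cpp
#include <algorithm>
#include <cmath>

// One surfel: an oriented disc sampled from a scene triangle. In the
// paper the surfel cloud lives on the GPU as an unstructured buffer;
// plain C++ is used here only to keep the sketch self-contained.
struct Surfel {
    float position[3];
    float normal[3];
    float radius;    // world-space extent of the disc
    float color[3];  // optionally pre-shaded, e.g. with direct light
};

// How many surfels should a triangle receive? The density is made
// proportional to the primitive's projected size: world-space area
// scaled by (focalLength / distance)^2. Note that no cosine
// (foreshortening) term appears, so oblique geometry is not
// under-sampled the way it is in a z-buffer. Parameter names are
// assumptions, not the paper's notation.
int surfelBudget(float worldArea, float cameraDistance,
                 float focalLength, float surfelsPerPixel)
{
    float s = focalLength / cameraDistance;
    float projectedArea = worldArea * s * s;       // roughly, in pixels^2
    return std::max(1, (int)std::ceil(projectedArea * surfelsPerPixel));
}

// During splatting, pick the level of the multi-resolution interleaved
// framebuffer a surfel writes to for a given receiver pixel: fine levels
// near the surfel, coarse levels far away. The doubling rule below is an
// assumed placeholder, not the paper's exact criterion.
int splatLevel(float surfelToReceiverDistance, float surfelRadius, int maxLevel)
{
    float ratio = surfelToReceiverDistance / std::max(surfelRadius, 1e-6f);
    int level = (int)std::floor(std::log2(std::max(ratio, 1.0f)));
    return std::min(level, maxLevel);
}
```

Note how the missing foreshortening term in surfelBudget directly targets one of the screen-space limitations listed above, the under-sampling of oblique geometry, while the distance-dependent level choice in splatLevel realizes the trade-off between detailed shading for nearby pixels and approximate shading for distant ones.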