Predictive occlusion culling for interactive rendering of large complex virtual scene

  • Authors:
  • Hua Xiong; Zhen Liu; Aihong Qin; Haoyu Peng; Xiaohong Jiang; Jiaoying Shi

  • Affiliation (all authors):
  • State Key Lab of CAD&CG, Zhejiang University, Hangzhou, P.R. China

  • Venue:
  • VSMM'06 Proceedings of the 12th international conference on Interactive Technologies and Sociotechnical Systems
  • Year:
  • 2006

Abstract

We present an efficient occlusion culling algorithm for interactive rendering of large, complex virtual scenes with high depth complexity. Our method exploits both the spatial and temporal coherence of visibility. A spatial hierarchy of the scene is constructed and its nodes are rendered in an approximate front-to-back order. Nodes inside the view frustum are inserted into one of several layered node lists, called layered buffers (LBs), according to their distance from the viewpoint. Each buffer in the LBs is rendered with hardware occlusion queries. By using a visibility predictor (VP) for each node and interleaving occlusion queries with rendering, we greatly reduce the number of occlusion queries and graphics pipeline stalls. The occlusion culling algorithm can work in a conservative mode for high-image-quality rendering or in an approximate mode for time-critical rendering. Experimental results on different types of virtual scenes are provided to demonstrate its efficiency and generality.
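
The interleaving of per-node visibility prediction with hardware occlusion queries described in the abstract can be illustrated with the rough sketch below. This is not the authors' implementation; it is only a minimal illustration assuming an OpenGL renderer with GL_SAMPLES_PASSED occlusion queries, and the types and helpers SceneNode, LayeredBuffers, insertIntoLayer, renderGeometry, and renderBoundingBox are hypothetical names introduced here for clarity.

```cpp
// Minimal sketch: distance-binned layered buffers plus a per-node visibility
// predictor that decides whether to render directly or issue an occlusion query.
#include <GL/glew.h>
#include <vector>

struct SceneNode {
    GLuint query = 0;          // hardware occlusion query object (lazily created)
    bool   wasVisible = true;  // visibility predictor: result from the last frame
    float  distance = 0.0f;    // distance from the viewpoint, updated per frame
    // ... bounding volume, geometry, child pointers, etc.
};

// Layered buffers: frustum-visible nodes binned by distance so each layer
// can be processed in an approximate front-to-back order.
using LayeredBuffers = std::vector<std::vector<SceneNode*>>;

void insertIntoLayer(LayeredBuffers& lbs, SceneNode* node, float layerWidth) {
    size_t layer = static_cast<size_t>(node->distance / layerWidth);
    if (layer >= lbs.size()) lbs.resize(layer + 1);
    lbs[layer].push_back(node);
}

// Hypothetical helpers provided by the surrounding renderer.
void renderGeometry(const SceneNode* n);
void renderBoundingBox(const SceneNode* n);

void renderLayer(std::vector<SceneNode*>& layer) {
    // Pass 1: nodes predicted visible are rendered immediately (no query stall);
    // nodes predicted hidden get an occlusion query on their bounding box.
    std::vector<SceneNode*> pending;
    for (SceneNode* n : layer) {
        if (n->wasVisible) {
            renderGeometry(n);
            // A full implementation would also re-query such nodes periodically
            // so the predictor can flip back to "hidden".
        } else {
            if (n->query == 0) glGenQueries(1, &n->query);
            glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
            glDepthMask(GL_FALSE);
            glBeginQuery(GL_SAMPLES_PASSED, n->query);
            renderBoundingBox(n);              // cheap proxy geometry
            glEndQuery(GL_SAMPLES_PASSED);
            glDepthMask(GL_TRUE);
            glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
            pending.push_back(n);
        }
    }
    // Pass 2: collect query results issued above; render nodes that turned out
    // visible and update the predictor for the next frame.
    for (SceneNode* n : pending) {
        GLuint samples = 0;
        glGetQueryObjectuiv(n->query, GL_QUERY_RESULT, &samples);
        n->wasVisible = (samples > 0);
        if (n->wasVisible) renderGeometry(n);
    }
}
```

Because already-rendered geometry from nearer layers fills the depth buffer before farther layers are queried, deferring the query readback to a second pass over each layer keeps the GPU busy and avoids stalling the pipeline on individual query results.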