View-dependent simplification of complex urban scenes using weighted quadtrees

  • Authors:
  • Bum-Jong Lee;Jong-Seung Park;Mee Young Sung

  • Affiliations:
  • Department of Computer Science & Engineering, University of Incheon, Incheon, Republic of Korea;Department of Computer Science & Engineering, University of Incheon, Incheon, Republic of Korea;Department of Computer Science & Engineering, University of Incheon, Incheon, Republic of Korea

  • Venue:
  • ICAT'06 Proceedings of the 16th international conference on Advances in Artificial Reality and Tele-Existence
  • Year:
  • 2006

Abstract

This article describes a new contribution culling method for view-dependent real-time rendering of huge, complex urban scenes. As a preprocessing step, view frustum culling is used to discard objects that lie outside the view frustum. To manage levels of detail, we subdivide the image region and construct a weighted quadtree. The weight of each quadtree node is defined as the sum of the weights of all objects contained in the node or its child nodes. The weight of an object depends on both its projected area in view space and its distance from the viewpoint. Hence, large buildings in the far distance are not always culled, since their contribution to the rendering quality can exceed that of small nearby buildings. We tested the proposed method by applying it to render a huge number of structures in our metropolitan section, which is currently under development. Experimental results showed that the proposed rendering method achieves real-time rendering of huge, complex scenes.
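To make the weighted-quadtree idea concrete, the sketch below shows one possible way such a structure could be built: a ground-plane quadtree whose node weights accumulate per-object weights derived from projected size and viewing distance. This is a minimal illustrative sketch, not the authors' implementation; the specific weight formula (projected bounding-sphere area), the field names, and the fixed subdivision depth are assumptions made here for illustration.

```cpp
// Illustrative sketch of a weighted quadtree over the ground plane.
// Object weights are assumed to model projected screen-space size,
// so large distant buildings can outweigh small nearby ones.
#include <cmath>
#include <cstdio>
#include <memory>
#include <vector>

struct Object {
    float x, z;      // position on the ground plane
    float radius;    // bounding-sphere radius (stands in for object size)
};

// Assumed weight model: projected area of the bounding sphere, which grows
// with object size and shrinks with distance from the viewpoint.
float objectWeight(const Object& o, float eyeX, float eyeZ) {
    float dx = o.x - eyeX, dz = o.z - eyeZ;
    float dist = std::sqrt(dx * dx + dz * dz) + 1e-3f;  // avoid divide-by-zero
    float projRadius = o.radius / dist;                  // perspective scaling
    return 3.14159265f * projRadius * projRadius;        // projected area
}

struct QuadNode {
    float minX, minZ, size;            // square region covered by this node
    float weight = 0.0f;               // sum of weights of contained objects
    std::unique_ptr<QuadNode> child[4];

    QuadNode(float mx, float mz, float s) : minX(mx), minZ(mz), size(s) {}

    // Insert an object, accumulating its weight along the path to the leaf,
    // so every ancestor node carries the summed weight of its contents.
    void insert(const Object& o, float w, int depth) {
        weight += w;
        if (depth == 0) return;
        float half = size * 0.5f;
        int cx = (o.x >= minX + half) ? 1 : 0;
        int cz = (o.z >= minZ + half) ? 1 : 0;
        int idx = cz * 2 + cx;
        if (!child[idx])
            child[idx] = std::make_unique<QuadNode>(
                minX + cx * half, minZ + cz * half, half);
        child[idx]->insert(o, w, depth - 1);
    }
};

int main() {
    std::vector<Object> scene = {
        {10.0f, 10.0f, 2.0f},    // small building near the viewpoint
        {400.0f, 400.0f, 60.0f}, // large building far away
    };
    float eyeX = 0.0f, eyeZ = 0.0f;

    QuadNode root(0.0f, 0.0f, 512.0f);
    for (const Object& o : scene)
        root.insert(o, objectWeight(o, eyeX, eyeZ), /*depth=*/4);

    // A renderer could traverse the tree and assign levels of detail by
    // node weight; here we simply print the accumulated root weight.
    std::printf("root weight: %f\n", root.weight);
    return 0;
}
```

In a renderer following the paper's idea, the per-node weights would then guide level-of-detail selection so that high-weight regions (whether near or far) receive more detail, while the exact weighting of projected area versus distance would follow the authors' definition rather than the simplified formula assumed above.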