Parallel selective rendering of high-fidelity virtual environments

  • Authors:
  • K. Debattista; A. Chalmers; R. Gillibrand; P. Longhurst; G. Mastoropoulou; V. Sundstedt

  • Affiliations:
  • Department of Computer Science, University of Bristol, Bristol BS8 1UB, United Kingdom (Debattista, Gillibrand, Longhurst, Mastoropoulou, Sundstedt); Warwick Digital Laboratory, University of Warwick, Coventry CV4 7AL, United Kingdom (Chalmers)

  • Venue:
  • Parallel Computing
  • Year:
  • 2007

Abstract

Physically-based computer graphics offers the potential of achieving high-fidelity virtual environments in which the propagation of light in real environments is accurately simulated. However, such global illumination computations, even for simple scenes, may take many seconds or longer to render a single frame, currently precluding their use in any interactive virtual environment, which requires many frames per second. Parallel processing is one solution, but it simply may not be feasible to have sufficient processors available. Furthermore, even if a suitably sized cluster is present, the overheads associated with parallel rendering may begin to dominate, restricting the parallel solution to a subset of the cluster. The human visual system (HVS) provides our visual sensory input, but while the HVS is good, it is not perfect. If we wish to invoke, in a virtual environment, the same perceptual response in viewers as if they were actually present in the real scene, then only those parts of an environment that are actually attended to by a viewer at any point in time need to be computed at the highest quality. The remainder of the image can be calculated at a much lower quality, and with much less computational expense, without the user being aware of this quality difference. This paper presents a rendering framework which exploits parallel processing and knowledge of the human visual system to provide a means of rendering high-fidelity virtual environments.
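
The selective-rendering idea summarised in the abstract, spending most of the computational budget where the viewer is attending and far less elsewhere, can be illustrated with a minimal sketch. The code below is an assumption-laden illustration rather than the authors' actual framework: it simply ramps a per-pixel sample count down with distance from a single gaze point (a crude model of the drop in acuity away from the fovea) and notes where the work parallelizes trivially. All names, such as samplesForPixel and the specific linear falloff, are hypothetical.

```cpp
// Sketch only: per-pixel sample budgets driven by distance from a gaze point,
// with the budget map computed in parallel. Not the paper's implementation.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct GazePoint { int x, y; };

// Fewer samples the further a pixel lies from the attended point,
// mimicking reduced visual acuity in the periphery (hypothetical falloff).
int samplesForPixel(int px, int py, const GazePoint& gaze,
                    int maxSamples, int minSamples, double fovealRadius)
{
    double dx = px - gaze.x, dy = py - gaze.y;
    double dist = std::sqrt(dx * dx + dy * dy);
    double t = std::min(1.0, dist / fovealRadius);  // 0 at the fovea, 1 in the periphery
    return static_cast<int>(maxSamples * (1.0 - t) + minSamples * t);
}

int main()
{
    const int width = 640, height = 480;
    const GazePoint gaze{320, 240};
    std::vector<int> sampleMap(width * height);

    // Rows are independent, so the budget map (and, in a real renderer, the
    // shading itself) can be split across threads or across cluster nodes.
#pragma omp parallel for
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            sampleMap[y * width + x] =
                samplesForPixel(x, y, gaze, /*maxSamples=*/64,
                                /*minSamples=*/4, /*fovealRadius=*/150.0);

    std::printf("samples at gaze: %d, at corner: %d\n",
                sampleMap[gaze.y * width + gaze.x], sampleMap[0]);
    return 0;
}
```

In a full system the sample budget would presumably be driven by a perceptual or saliency model rather than gaze distance alone, and the expensive global illumination samples, not just the budget map, would be distributed across the cluster.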