Perceptual rendering of participating media

  • Authors:
  • Veronica Sundstedt (University of Bristol, Bristol, UK)
  • Diego Gutierrez (University of Zaragoza, Zaragoza, Spain)
  • Oscar Anson (University of Zaragoza, Zaragoza, Spain)
  • Francesco Banterle (University of Bristol, Bristol, UK)
  • Alan Chalmers (University of Bristol, Bristol, UK)

  • Venue:
  • ACM Transactions on Applied Perception (TAP)
  • Year:
  • 2007

Abstract

High-fidelity image synthesis is the process of computing images that are perceptually indistinguishable from the real world they are attempting to portray. Such a level of fidelity requires that the physical processes of materials and the behavior of light are accurately simulated. Most computer graphics algorithms assume that light passes freely between surfaces within an environment. In many applications, however, we also need to take into account how light interacts with media between the surfaces, such as dust, smoke, or fog. The computational requirements for calculating the interaction of light with such participating media are substantial: the process can take many hours, and rendering effort is often spent computing parts of the scene that the viewer may never perceive. In this paper, we present a novel perceptual strategy for physically based rendering of participating media. By combining a saliency map with our new extinction map (X map), we can significantly reduce rendering times for inhomogeneous media. The visual quality of the resulting images is validated using two objective difference metrics and a subjective psychophysical experiment. Although the average pixel errors reported by these metrics are all below 1%, the subjective validation indicates that the degradation in quality is still noticeable for certain scenes. We therefore introduce and validate a novel light map (L map) that accounts for salient features caused by multiple light scattering around light sources.
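
The abstract does not define what the extinction map stores, but extinction in participating media is conventionally expressed through the Beer-Lambert transmittance along a ray; a plausible reading (an assumption here, not a statement of the authors' method) is that the X map caches this quantity per pixel:

\[
T(\mathbf{x}, \boldsymbol{\omega}, d) = \exp\!\left(-\int_{0}^{d} \sigma_t(\mathbf{x} + s\,\boldsymbol{\omega})\,\mathrm{d}s\right),
\]

where \(\sigma_t\) is the (possibly spatially varying) extinction coefficient, \(\boldsymbol{\omega}\) the ray direction, and \(d\) the distance travelled through the medium.

Under that interpretation, the sketch below shows one way a saliency map and such a transmittance map could be combined to steer per-pixel sampling effort. The function name, parameters, and the multiplicative combination rule are all illustrative assumptions; the paper's actual algorithm is not reproduced here.

```python
import numpy as np

def sample_budget(saliency, transmittance, n_min=4, n_max=64):
    """Toy per-pixel sample allocation (illustrative, not the paper's).

    saliency      -- 2-D array in [0, 1]; higher = more visually
                     important (e.g. from an Itti-Koch style model).
    transmittance -- 2-D array in [0, 1]; fraction of light surviving
                     the medium along each primary ray (1 = clear air,
                     0 = fully opaque smoke/fog).
    """
    # Assumed combination rule: detail hidden behind dense media is
    # attenuated anyway, so weight saliency by transmittance.
    importance = saliency * transmittance
    budget = n_min + (n_max - n_min) * importance
    return np.rint(budget).astype(int)

# Example on a 2x2 patch:
s = np.array([[0.9, 0.1],
              [0.8, 0.2]])   # saliency
t = np.array([[1.0, 1.0],
              [0.1, 0.9]])   # transmittance
print(sample_budget(s, t))
# [[58 10]
#  [ 9 15]]
# A salient pixel in clear air gets 58 samples, while an equally
# salient pixel hidden behind dense media drops to 9.
```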