Perceptual rendering of participating media
ACM Transactions on Applied Perception (TAP)
Realistic image synthesis is the process of computing photorealistic images that are perceptually and measurably indistinguishable from real-world images. Obtaining high-fidelity rendered images requires that material properties and the behaviour of light be accurately modelled and simulated. Most computer graphics algorithms assume that light passes freely between surfaces in an environment. However, many applications, ranging from the evaluation of exit signs in smoke-filled rooms to the design of efficient headlamps for foggy driving, require realistic modelling of light propagation and scattering. The computational cost of simulating the interaction of light with such participating media is substantial: rendering can take many minutes or even hours, and much of that effort is often spent on parts of the scene that the viewer will never perceive. In this paper we present a novel perceptual strategy for physically based rendering of participating media. By combining a saliency map with our new extinction map (X-map), we can significantly reduce rendering times for inhomogeneous media. We also validate the visual quality of the resulting images using two objective difference metrics and a subjective psychophysical experiment. Although the average pixel errors reported by these metrics are all below 1%, the experiment with human observers indicates that the degradation in quality is still noticeable in certain scenes, contrary to what previous work has suggested.
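The abstract does not detail how the extinction map is built or combined with saliency, so the following is only a rough illustrative sketch: per-ray transmittance follows the standard Beer-Lambert law for inhomogeneous media, estimated here by ray marching, while the function names and the saliency-weighted sample-budget heuristic are assumptions for illustration, not the paper's actual method.

```python
import math

def march_transmittance(sigma_t, length, steps=64):
    """Estimate Beer-Lambert transmittance T = exp(-integral of sigma_t ds)
    along a ray of the given length through an inhomogeneous medium,
    using midpoint ray marching. sigma_t maps distance -> extinction."""
    ds = length / steps
    tau = 0.0  # accumulated optical depth
    for i in range(steps):
        s = (i + 0.5) * ds       # midpoint of the i-th segment
        tau += sigma_t(s) * ds   # add this segment's optical depth
    return math.exp(-tau)

def sample_budget(saliency, transmittance, base=4, max_extra=28):
    """Hypothetical per-pixel sample count: spend more samples where the
    pixel is salient AND the medium lets surface detail through (high
    transmittance); heavily occluded or unattended pixels get the base."""
    w = saliency * transmittance  # both assumed to be in [0, 1]
    return base + round(w * max_extra)
```

For a homogeneous medium, the march is exact: with constant extinction 0.5 over a ray of length 2, the optical depth is 1 and the transmittance is e^-1. The budget heuristic then concentrates effort on salient, visible pixels and falls back to a minimal sampling rate elsewhere, which is the general spirit of perceptually driven selective rendering.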