A major obstacle for real-time rendering of high-fidelity graphics is computational complexity. A key point in the pursuit of “realism in real time” in computer graphics is that the Human Visual System (HVS) is a fundamental part of the rendering pipeline. The human eye can only sense image detail in a 2° foveal region, relying on rapid eye movements, or saccades, to jump between points of interest. These points of interest are prioritized based on the saliency of the objects in the scene or on the task the user is performing. Such “glimpses” of a scene are then assembled by the HVS into a coherent, but inevitably imperfect, visual perception of the environment. In this process, much detail that the HVS deems unimportant may simply go unnoticed. Visual science research has identified that movement in the background of a scene may substantially influence how subjects perceive foreground objects. Furthermore, recent computer graphics work has shown that both fixed-viewpoint and dynamic scenes can be selectively rendered, without any perceptual loss of quality and in significantly reduced time, by exploiting knowledge of any high-saliency movement that may be present. High-saliency movement can be generated in a scene if an otherwise static object starts moving. In this article, we investigate, through psychophysical experiments including eye tracking, the perception of rendering quality in dynamic complex scenes when a moving object is introduced. Two types of object movement are investigated: (i) rotation in place and (ii) rotation combined with translation. These were chosen as the simplest movement types; future studies may include movement with varied acceleration. The object's geometry and location in the scene are not salient. We then use this information to guide our high-fidelity selective renderer to produce perceptually high-quality images at significantly reduced computation times.
We also show how these results have important implications for virtual environment and computer game applications.
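The core idea of selective rendering described above, spending more rendering effort where the HVS is likely to attend, can be sketched as a per-pixel sample-budget allocation driven by a saliency map. The sketch below is illustrative only: the function name and the simple proportional-allocation scheme are our own assumptions, not the renderer used in the paper.

```python
import numpy as np

def allocate_samples(saliency, total_samples, min_samples=1):
    """Distribute a ray-sample budget across pixels in proportion to saliency.

    saliency      : 2-D array of non-negative per-pixel saliency values
                    (e.g. elevated around a moving object).
    total_samples : overall sample budget for the frame.
    min_samples   : floor so that low-saliency pixels still get rendered,
                    just at reduced quality.
    """
    # Normalize saliency into per-pixel weights that sum to 1.
    weights = saliency / saliency.sum()
    # Scale the budget by the weights; salient pixels receive more samples.
    samples = np.maximum(min_samples,
                         np.floor(weights * total_samples)).astype(int)
    return samples
```

In such a scheme, a renderer would trace `samples[y, x]` rays per pixel, so regions containing high-saliency movement converge at full quality while the periphery is computed more cheaply, mirroring the perceptual trade-off the experiments exploit.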