Alternate feature location for rapid navigation using a 3D map on a mobile device
Proceedings of the 3rd international conference on Mobile and ubiquitous multimedia
Our perception of the world depends on the task we are currently performing in the environment: when driving a car, we pay attention to the objects that are visually important to that task, such as the road, road signs, and other vehicles. The same is true when we explore virtual environments. Creating high-fidelity 3D maps on mobile devices to aid navigation in urban environments is computationally very expensive, precluding this quality at interactive rates. In this paper we present a case study showing how the human visual system may be exploited, when viewers are undertaking a task, to reduce the overall quality of the displayed image without users being aware of the reduction. The images are selectively rendered: the key features used to identify location and orientation in a 3D urban environment are produced in high quality, and the remainder of the image in low quality.
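The selective-rendering idea can be sketched in a few lines: given an importance mask marking the key navigation features, composite a high-sample render of those regions over a cheap low-sample render of everything else. This is a minimal illustration, not the paper's implementation; the `render` function, sample counts, and mask construction below are all hypothetical stand-ins.

```python
import numpy as np

def render(width, height, samples_per_pixel):
    """Stand-in for a renderer: returns an RGB image whose noise level
    falls as the sample count rises (hypothetical, not the paper's renderer)."""
    rng = np.random.default_rng(0)
    base = np.full((height, width, 3), 0.5)
    noise = rng.normal(0.0, 0.2 / np.sqrt(samples_per_pixel), base.shape)
    return np.clip(base + noise, 0.0, 1.0)

def selective_render(importance_mask, hi_spp=64, lo_spp=4):
    """Blend a high-quality render over salient regions (key features)
    with a cheap render elsewhere, guided by a binary importance mask."""
    h, w = importance_mask.shape
    hi = render(w, h, hi_spp)   # expensive: key location/orientation features
    lo = render(w, h, lo_spp)   # cheap: visual periphery
    mask = importance_mask[..., None].astype(float)
    return mask * hi + (1.0 - mask) * lo

# Mark a central region (e.g. a landmark used for orientation) as important.
mask = np.zeros((120, 160), dtype=bool)
mask[40:80, 60:100] = True
image = selective_render(mask)
```

For clarity the sketch renders both full frames and composites them; a real implementation would instead allocate samples per pixel, tracing only the salient pixels at the high rate so the saved work translates into interactive frame rates.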