Visual attention models for producing high fidelity graphics efficiently

  • Authors:
  • Alan Chalmers;Kirsten Cater;David Maflioli

  • Affiliations:
  • University of Bristol, UK;University of Bristol, UK;University of Bristol, UK

  • Venue:
  • SCCG '03: Proceedings of the 19th Spring Conference on Computer Graphics
  • Year:
  • 2003

Abstract

Despite the ready availability of modern high-performance graphics cards, the complexity of the scenes being modelled and the realism required of the images mean that rendering high-fidelity computer images is still not possible in reasonable time, let alone in real time. Knowing that a human will be looking at the resulting images can be exploited to significantly reduce the computation time required, for although the human visual system is good, it has limitations. The key is knowing where the user will be looking in the image. This paper describes high-level task maps and low-level saliency maps. For a large number of applications, these visual attention models can determine where the user will be looking in a scene with high accuracy. This information is then used to selectively render different parts of a complex scene at different qualities. We show that viewers performing a known visual task within the environment consistently fail to notice the difference in rendering quality between high-quality benchmark images and selectively rendered images produced at a fraction of the computational cost.
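
To make the selective-rendering idea concrete, here is a minimal Python sketch of the general technique the abstract describes, not the authors' implementation: an attention map in [0, 1] drives the number of samples spent per pixel, so predicted fixation regions get full quality while the periphery is rendered cheaply. The Gaussian "task map", the sample budget, and the render_pixel shading stub are all illustrative assumptions.

```python
import numpy as np

def sample_counts(saliency, min_spp=1, max_spp=16):
    """Map an attention map in [0, 1] to per-pixel sample counts."""
    s = np.clip(saliency, 0.0, 1.0)
    return np.rint(min_spp + s * (max_spp - min_spp)).astype(int)

def render_pixel(x, y, n_samples, rng):
    """Placeholder shader: average n jittered sub-pixel samples.
    A real renderer would trace rays here; this returns synthetic radiance."""
    xs = x + rng.random(n_samples)
    ys = y + rng.random(n_samples)
    radiance = 0.5 + 0.5 * np.sin(0.1 * xs) * np.cos(0.1 * ys)
    return radiance.mean()

def selective_render(saliency, rng=None):
    """Render each pixel with a sample count proportional to its saliency."""
    rng = rng or np.random.default_rng(0)
    spp = sample_counts(saliency)
    h, w = saliency.shape
    image = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            image[y, x] = render_pixel(x, y, spp[y, x], rng)
    return image, spp

# Example: a Gaussian task map centred on a hypothetical object of interest.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
saliency = np.exp(-((xx - 40) ** 2 + (yy - 24) ** 2) / (2 * 12.0 ** 2))
image, spp = selective_render(saliency)
print(f"total samples: {spp.sum()} vs uniform 16 spp: {16 * h * w}")
```

In a full system along the lines the paper suggests, the map fed to sample_counts would come from a high-level task map, a low-level saliency map, or a combination of the two, rather than the fixed Gaussian used here.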