Selective quality rendering by exploiting human inattentional blindness: looking but not seeing

  • Authors: Kirsten Cater, Alan Chalmers, Patrick Ledda
  • Affiliations: University of Bristol, Bristol, UK (all authors)
  • Venue: VRST '02: Proceedings of the ACM Symposium on Virtual Reality Software and Technology
  • Year: 2002

Abstract

There are two major influences on human visual attention: bottom-up and top-down processing. Bottom-up processing is the automatic direction of gaze to lively or colourful objects, as determined by low-level vision. In contrast, top-down processing is consciously directed attention in the pursuit of predetermined goals or tasks. Previous work in perception-based rendering has exploited bottom-up visual attention to control the level of detail (and therefore the time) spent rendering parts of a scene. In this paper, we demonstrate the principle of Inattentional Blindness, a major side effect of top-down processing, whereby portions of the scene unrelated to the specific task go unnoticed. In our experiment, we showed a pair of animations rendered at different quality levels to 160 subjects and then asked whether they noticed a change. We instructed half the subjects to simply watch the animation, while the other half performed a specific task during the animation. When parts of the scene outside the focus of this task were rendered at lower quality, almost none of the task-directed subjects noticed, whereas the difference was clearly visible to the control group. Our results clearly show that top-down visual processing can be exploited to reduce rendering times substantially without compromising perceived visual quality in interactive tasks.
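The abstract does not specify an algorithm, but the core idea it describes — spending the full rendering budget only on scene regions relevant to the viewer's task, and rendering everything else at reduced quality — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation; the function names, the screen-space focus radius, and the sample counts are all assumptions chosen for clarity.

```python
import math

# Illustrative quality settings (assumed, not from the paper):
HIGH_QUALITY_SPP = 64   # samples per pixel inside the task-focus region
LOW_QUALITY_SPP = 4     # samples per pixel elsewhere
FOCUS_RADIUS_PX = 120   # screen-space radius around task-relevant objects


def samples_for_pixel(px, py, focus_points):
    """Return a per-pixel sample budget: full quality near any
    task-relevant object, reduced quality everywhere else."""
    for fx, fy in focus_points:
        if math.hypot(px - fx, py - fy) <= FOCUS_RADIUS_PX:
            return HIGH_QUALITY_SPP
    return LOW_QUALITY_SPP


def render_frame(width, height, focus_points, shade):
    """Render one frame with selective quality.

    `focus_points` lists the screen-space positions of task-relevant
    objects; `shade(px, py, spp)` is a user-supplied shading routine
    that returns a colour given a sample budget.
    """
    image = [[None] * width for _ in range(height)]
    for py in range(height):
        for px in range(width):
            spp = samples_for_pixel(px, py, focus_points)
            image[py][px] = shade(px, py, spp)
    return image
```

If the task-focus regions cover only a small fraction of the frame, most pixels receive the low sample count, which is where the rendering-time savings the abstract reports would come from.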