Perceptually guided high-fidelity rendering exploiting movement bias in visual attention

  • Authors:
  • Jasminka Hasic; Alan Chalmers; Elena Sikudova

  • Affiliations:
  • Sarajevo School of Science and Technology, Sarajevo, Bosnia and Herzegovina / University of Warwick; The Digital Laboratory, WMG, University of Warwick, Coventry, UK; The Digital Laboratory, WMG, University of Warwick, Coventry, UK

  • Venue:
  • ACM Transactions on Applied Perception (TAP)
  • Year:
  • 2010


Abstract

A major obstacle to real-time rendering of high-fidelity graphics is computational complexity. A key point to consider in the pursuit of “realism in real time” in computer graphics is that the Human Visual System (HVS) is a fundamental part of the rendering pipeline. The human eye is only capable of sensing image detail in a 2° foveal region, relying on rapid eye movements, or saccades, to jump between points of interest. These points of interest are prioritized based on the saliency of the objects in the scene or the task the user is performing. Such “glimpses” of a scene are then assembled by the HVS into a coherent, but inevitably imperfect, visual perception of the environment. In this process, much detail that the HVS deems unimportant may literally go unnoticed. Visual science research has identified that movement in the background of a scene may substantially influence how subjects perceive foreground objects. Furthermore, recent computer graphics work has shown that both fixed-viewpoint and dynamic scenes can be selectively rendered, in significantly reduced time and without any perceptual loss of quality, by exploiting knowledge of any high-saliency movement that may be present. A high-saliency movement can be generated in a scene when an otherwise static object starts moving. In this article, we investigate, through psychophysical experiments including eye-tracking, the perception of rendering quality in dynamic complex scenes based on the introduction of a moving object. Two types of object movement are investigated: (i) rotation in place and (ii) rotation combined with translation. These were chosen as the simplest movement types; future studies may include movement with varied acceleration. The object's geometry and location in the scene are not salient. We then use this information to guide our high-fidelity selective renderer to produce perceptually high-quality images at significantly reduced computation times. We also show how these results have important implications for virtual environment and computer game applications.
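The core idea of a saliency-guided selective renderer can be illustrated with a minimal sketch: spend full rendering effort (samples per pixel) near the salient moving object and degrade gracefully elsewhere. The function name, the Gaussian falloff, and all parameter values below are illustrative assumptions for this sketch, not the method from the paper.

```python
import math

def sample_budget(width, height, mover_xy, base_spp=1, peak_spp=16, sigma=60.0):
    """Assign per-pixel sample counts: peak quality at the salient moving
    object's projected position, falling off with distance (Gaussian),
    base quality everywhere else. Returns a height x width grid of ints."""
    mx, my = mover_xy
    budget = []
    for y in range(height):
        row = []
        for x in range(width):
            d2 = (x - mx) ** 2 + (y - my) ** 2
            w = math.exp(-d2 / (2.0 * sigma * sigma))  # saliency weight in [0, 1]
            row.append(base_spp + round(w * (peak_spp - base_spp)))
        budget.append(row)
    return budget

# Compare total work against rendering every pixel at peak quality.
budget = sample_budget(320, 240, mover_xy=(160, 120))
selective = sum(map(sum, budget))
uniform = 320 * 240 * 16
print(f"selective: {selective} samples, uniform: {uniform}, "
      f"saving: {1 - selective / uniform:.0%}")
```

In a real renderer the per-pixel budget would also account for task focus and viewer eccentricity, but even this toy falloff shows why selective rendering can cut computation substantially while keeping the attended region at full quality.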