A schema-based selective rendering framework

  • Authors:
  • Alexandros Zotos (Technical University of Crete, Greece); Katerina Mania (Technical University of Crete, Greece); Nicholaos Mourkoussis (University of Sussex, UK)

  • Venue:
  • Proceedings of the 6th Symposium on Applied Perception in Graphics and Visualization

  • Year:
  • 2009


Abstract

Perception principles have been incorporated into rendering algorithms in order to optimize rendering computation and produce photorealistic images from a human rather than a machine point of view. To economize on rendering computation, selective rendering assigns a high level of detail to specific regions of a synthetic scene and lower quality to the remaining scene, without compromising the level of information transmitted. Scene regions rendered in low and high quality can then be combined to form one complete scene. Such decisions are typically guided by predictive attention modeling, gaze, or task-based information. We propose a novel selective rendering approach that is task- and gaze-independent, simulating the cognitive creation of spatial hypotheses. Scene objects are rendered in varying polygon quality according to how strongly they are associated with the context (schema) of the scene. Experimental studies in synthetic scenes have revealed that consistent objects, which are expected to be found in a scene, can be rendered in lower quality without affecting information uptake. Conversely, inconsistent items, which are salient, require a high level of rendering detail in order to be perceptually acknowledged. The contribution of this paper is an innovative X3D-based selective rendering framework based on memory schemas and implemented through metadata enrichment.
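The core decision rule the abstract describes can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual framework: the names (`SceneObject`, `choose_polygon_budget`) and the polygon budgets are hypothetical, and schema consistency is reduced to a single boolean flag per object.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    schema_consistent: bool  # is the object expected under the scene's context (schema)?

def choose_polygon_budget(obj: SceneObject, high: int = 10000, low: int = 1000) -> int:
    """Assign a polygon budget per object.

    Schema-consistent objects (expected in the scene) tolerate lower
    rendering quality without affecting information uptake, while
    schema-inconsistent, salient objects keep full detail.
    Budget values here are illustrative placeholders.
    """
    return low if obj.schema_consistent else high

# Example: in an office scene, a desk fits the schema; a teddy bear does not.
office = [SceneObject("desk", True), SceneObject("teddy bear", False)]
budgets = {o.name: choose_polygon_budget(o) for o in office}
print(budgets)  # the consistent desk gets the low budget, the bear the high one
```

In the paper's framework this per-object consistency information is carried as metadata in the X3D scene description rather than hard-coded flags.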