On seeing and rendering colour gradients

  • Authors:
  • Alexa I. Ruppertsberg, Anya Hurlbert, Marina Bloj

  • Affiliations:
  • University of Bradford, Bradford, UK; Newcastle University, Newcastle upon Tyne, UK; University of Bradford, Bradford, UK

  • Venue:
  • Proceedings of the 4th Symposium on Applied Perception in Graphics and Visualization
  • Year:
  • 2007

Abstract

Ten years ago, Greenberg and colleagues presented their framework for realistic image synthesis [Greenberg et al. 1997], aiming "to develop physically based lighting models and perceptually based rendering procedures for computer graphics that will produce synthetic images that are visually and measurably indistinguishable from real-world images", paraphrasing Sutherland's 'ultimate display' [Sutherland 1965]. They specifically encouraged vision researchers to use natural, complex and three-dimensional (3D) visual displays to get a better understanding of human vision and to develop more comprehensive visual models for computer graphics that will improve the efficiency of algorithms. In this paper we follow Greenberg et al.'s directive and analyse colour and luminance gradients in a complex 3D scene. The gradients arise from changes in light source position and surface orientation. Information in image gradients could apprise the visual system of intrinsic surface reflectance properties or extrinsic illumination phenomena, including shading, shadowing and inter-reflections. Colour gradients induced by inter-reflection may play a similar role to that of luminance gradients in shape-from-shading algorithms; it has been shown that 3D shape perception modulates the influence of inter-reflections on surface colour perception [Bloj et al. 1999]. Here we report a psychophysical study in which we tested whether observers were able to discriminate between gradients due to different light source positions, and found that they reliably detected a change in the gradient information when the light source position differed by only 4 deg from the reference scene (Experiment 1). This sensitivity was mainly based on the luminance information in the gradient (Experiments 2 and 3). We conclude that, for a realistic impression of a scene, a global illumination algorithm should model the luminance component of inter-reflections accurately, whereas it is less critical to represent the spatial variation in chromaticity accurately.
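
To make the distinction tested in Experiments 2 and 3 concrete, the sketch below splits a rendered gradient into a luminance map and per-pixel chromaticity coordinates, so the two components can be inspected separately. This is a minimal illustration only, not the authors' analysis pipeline: the Rec. 709 luminance weights and the rg-chromaticity parameterisation are assumptions standing in for whatever calibrated colour space the study actually used.

```python
import numpy as np

def split_luminance_chromaticity(rgb):
    """Split a linear-RGB image (H x W x 3) into a luminance map and
    rg-chromaticity coordinates.

    Assumes Rec. 709 / sRGB primaries; the weights are illustrative,
    not those of the calibrated display used in the paper.
    """
    rgb = np.asarray(rgb, dtype=float)
    # Rec. 709 luminance weights for linear RGB.
    luminance = rgb @ np.array([0.2126, 0.7152, 0.0722])
    # rg chromaticity: per-pixel colour direction, independent of overall intensity.
    total = rgb.sum(axis=-1, keepdims=True)
    chromaticity = np.divide(rgb, total, out=np.zeros_like(rgb), where=total > 0)
    return luminance, chromaticity[..., :2]  # (r, g); b = 1 - r - g

# Example: a hypothetical horizontal gradient whose intensity ramps up
# while its colour shifts from bluish to reddish, mimicking the mixed
# luminance/chromaticity variation produced by inter-reflections.
x = np.linspace(0.2, 1.0, 256)
gradient = np.stack([x, 0.5 * np.ones_like(x), 1.0 - 0.5 * x], axis=-1)[None, :, :]
Y, rg = split_luminance_chromaticity(gradient)
```

Comparing the spatial profile of `Y` with that of `rg` along the gradient is the kind of separation the experiments rely on: holding one component constant while the other varies isolates which source of information drives discrimination.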