Visual equivalence: towards a new standard for image fidelity

  • Authors: Ganesh Ramanarayanan, James Ferwerda, Bruce Walter, Kavita Bala
  • Affiliation: Cornell University
  • Venue: ACM SIGGRAPH 2007 Papers
  • Year: 2007

Abstract

Efficient, realistic rendering of complex scenes is one of the grand challenges in computer graphics. Perceptually based rendering addresses this challenge by taking advantage of the limits of human vision. However, existing methods, based on predicting visible image differences, are too conservative, because some kinds of image differences do not matter to human observers. In this paper, we introduce the concept of visual equivalence, a new standard for image fidelity in graphics. Images are visually equivalent if they convey the same impressions of scene appearance, even if they are visibly different. To understand this phenomenon, we conduct a series of experiments that explore how object geometry, material, and illumination interact to provide information about appearance, and we characterize how two kinds of transformations on illumination maps (blurring and warping) affect these appearance attributes. We then derive visual equivalence predictors (VEPs): metrics for predicting when images rendered with transformed illumination maps will be visually equivalent to images rendered with reference maps. We also run a confirmatory study to validate the effectiveness of these VEPs for general scenes. Finally, we show how VEPs can be used to improve the efficiency of two rendering algorithms: Lightcuts and precomputed radiance transfer. This work represents some promising first steps towards developing perceptual metrics based on higher-order aspects of visual coding.
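
To make the two illumination-map transformations mentioned in the abstract concrete, the sketch below blurs and warps a latitude-longitude environment map. It is not from the paper: the function names, the Gaussian blur as a stand-in for the paper's blur condition, and the longitude roll as a stand-in for its warp condition are all illustrative assumptions, and the VEP metrics themselves are not reproduced here.

```python
# Illustrative sketch only: simple stand-ins for the blur and warp
# transformations on illumination maps described in the abstract,
# not the paper's actual procedure or its VEP metrics.
import numpy as np
from scipy.ndimage import gaussian_filter


def blur_illumination_map(env_map: np.ndarray, sigma: float) -> np.ndarray:
    """Blur an HDR lat-long environment map channel-wise (hypothetical helper).

    env_map: H x W x 3 array of linear radiance values.
    sigma:   Gaussian standard deviation in pixels; the paper parameterizes
             blur differently, so treat this as a rough proxy.
    """
    return np.stack(
        [gaussian_filter(env_map[..., c], sigma=sigma) for c in range(3)],
        axis=-1,
    )


def warp_illumination_map(env_map: np.ndarray, shift_px: int) -> np.ndarray:
    """Warp a lat-long environment map by rotating it about the vertical axis.

    Rolling the longitude axis is a crude proxy for the warping condition
    studied in the paper.
    """
    return np.roll(env_map, shift_px, axis=1)


# Example: produce transformed maps that a VEP-style metric would then compare
# against renderings made with the reference map.
reference = np.random.rand(256, 512, 3).astype(np.float32)  # placeholder HDR map
blurred = blur_illumination_map(reference, sigma=8.0)
warped = warp_illumination_map(reference, shift_px=32)
```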