Efficient reflectance models for vision and graphics

  • Authors:
  • Todd Zickler; Fabiano Segadaes Romeiro

  • Affiliations:
  • Harvard University; Harvard University

  • Venue:
  • PhD thesis, Harvard University
  • Year:
  • 2010

Abstract

Different surfaces reflect light in different ways, and this gives rise to distinctive lightness, gloss, sheen, haze, and so on. Thus, like shape and color, surface reflectance plays a significant role in characterizing objects. Reflectance is important to both computer vision, which seeks to infer scene information from images, and computer graphics, which seeks to create realistic synthetic images. Indeed, in the former case scene reflectance is a key component of image formation, and in the latter, accurate surface reflectance is a prerequisite for recreating realistic scenes. The way reflectance is modeled can have a tremendous impact on the end result in both applications. In graphics, there is a clear tradeoff between the computational cost of a rendering system, its realism, and its ability to support a broad class of materials. In the vision case, there is a tradeoff between the accuracy of the reflectance model and one's ability to use it for tractable inference. In both cases, efficient reflectance models are the key to achieving good tradeoffs between accuracy, computational complexity, and generality of materials. This thesis makes two main contributions. First, we devise efficient reflectance models that are well suited to graphics and vision applications, in the sense that they allow one to handle a more general class of materials without prohibitive computation. Second, by devising methods that use these models efficiently, we advance the state of the art in both material recognition (vision) and interactive rendering (graphics). In the vision case, our methods allow one to infer reflectance "in the wild", using as little as a single image of a known shape under an unknown, real-world lighting environment. In the graphics case, our approach leads to an interactive system for simultaneously editing the lighting, reflectance, and viewpoint of a complex scene while supporting a much broader class of materials than existing interactive systems.
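
For context, both the rendering and the inference problems described above revolve around the standard reflectance (image-formation) equation; the sketch below uses generic notation and is not the specific parameterization developed in the thesis. The radiance observed at a surface point is an integral, over the hemisphere of incident directions, of the environment lighting, the BRDF, and a cosine foreshortening term:

\[
I(\mathbf{x}, \omega_o) \;=\; \int_{\Omega} \rho(\omega_i, \omega_o)\, L(\omega_i)\, \max\bigl(0,\ \mathbf{n}(\mathbf{x}) \cdot \omega_i\bigr)\, d\omega_i ,
\]

where \(L\) is the incident lighting, \(\rho\) the BRDF, \(\mathbf{n}(\mathbf{x})\) the surface normal, and \(\omega_i, \omega_o\) the incident and viewing directions. Graphics must evaluate this integral quickly for many pixels as lighting, reflectance, and viewpoint change, while vision must invert it, recovering \(\rho\) from measurements of \(I\) when \(L\) is unknown; in both directions, the cost and tractability depend directly on how \(\rho\) is represented.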