Acquisition and representation of material appearance for editing and rendering

  • Authors: Szymon Rusinkiewicz; Jason Lawrence
  • Affiliations: Princeton University; Princeton University
  • Venue: Acquisition and representation of material appearance for editing and rendering
  • Year: 2006

Abstract

Providing computer models that accurately characterize the appearance of a wide class of materials is of great interest to both the computer graphics and computer vision communities. The last ten years have witnessed a surge in techniques for measuring the appearance of real-world materials. Broadly speaking, this requires recording thousands of measurements of the way a material reflects light from different viewing directions, for different directions of incident illumination, and at different points along its surface. Compared with conventional techniques that rely on hand-tuning parametric light reflectance functions, this data-driven approach is better suited to representing complex real-world appearance. However, incorporating these techniques into existing rendering algorithms and a practical production pipeline has remained an open research problem. One common approach has been to fit the parameters of an analytic reflectance function to measured appearance data. This has the benefit of providing significant compression ratios, and these analytic models are already fully integrated into rendering algorithms and existing production pipelines. However, this approach can lead to significant approximation errors for many materials, and it requires computationally expensive and numerically unstable non-linear optimization. An alternative approach is to compress these datasets using standard dimensionality-reduction techniques such as PCA, wavelet compression, or matrix factorization. Although these techniques provide an accurate and compact representation, they have several drawbacks. In particular, existing techniques do not enable efficient importance sampling for measured materials (and even some complex analytic models) in the context of physically-based rendering systems. Additionally, these representations do not allow editing. 
In this thesis, we introduce techniques for acquiring and representing real-world material appearance that address these research challenges. First, we introduce the Inverse Shade Trees (IST) framework, a conceptual framework for representing high-dimensional measured appearance data as a tree-structured collection of simpler masks and functions. We use it to provide an intuitive representation of the Spatially-Varying Bidirectional Reflectance Distribution Function (SVBRDF) that is automatically computed from measured data. Like other data-driven techniques, ISTs are more accurate than parametric BRDFs fit to measured appearance data, yet are intuitive enough to support direct editing. We also introduce a factored model of the BRDF optimized to support efficient importance sampling in the context of global illumination rendering. We demonstrate that our technique provides more efficient sampling than previous methods that sample a best-fit parametric model. Lastly, we introduce a representation suitable for compressing and sampling non-parametric functions of arbitrary dimension. We show this representation is useful for sampling image-based illumination and reflectance within physically-based rendering algorithms.
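The basic operation behind importance sampling a tabulated (non-parametric) function, as used when sampling measured reflectance or image-based illumination, is inverse-CDF sampling. The sketch below is a generic 1D illustration under assumed inputs (the target function, table size, and sample count are not from the thesis): build a discrete CDF over the table and invert it with a binary search per uniform random number.

```python
import numpy as np

# Tabulated, unnormalized 1D target function over an angle-like parameter.
# cos(x) is a stand-in for a measured reflectance or illumination slice.
x = np.linspace(0.0, np.pi / 2, 256)
f = np.cos(x)

# Normalize to a discrete pdf and accumulate into a CDF over table entries.
pdf = f / f.sum()
cdf = np.cumsum(pdf)

rng = np.random.default_rng(1)
u = rng.random(10_000)                 # uniform samples in [0, 1)
idx = np.searchsorted(cdf, u)          # invert the CDF by binary search
idx = np.minimum(idx, len(x) - 1)      # guard against floating-point round-off
samples = x[idx]

# Samples concentrate where f is large: for cos, mostly below pi/4.
frac_low = np.mean(samples < np.pi / 4)
```

Because draws land preferentially where the tabulated function is large, a Monte Carlo renderer using such samples sees far lower variance than uniform sampling, which is the efficiency gain the thesis targets for measured data where no analytic inverse exists.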