Generating Sub-Resolution Detail in Images and Volumes Using Constrained Texture Synthesis

  • Authors:
  • Lujin Wang; Klaus Mueller

  • Affiliations:
  • Stony Brook University; Stony Brook University

  • Venue:
  • VIS '04: Proceedings of the conference on Visualization '04
  • Year:
  • 2004


Abstract

A common deficiency of discretized datasets is that detail beyond the resolution of the dataset has been irrecoverably lost. This lack of detail becomes immediately apparent once one attempts to zoom into the dataset and only recovers blur. Here, we describe a method that generates the missing detail from any available and plausible high-resolution data, using texture synthesis. Since the detail generation process is guided by the underlying image or volume data and is designed to fill in plausible detail in accordance with the coarse structure and properties of the zoomed-in neighborhood, we refer to our method as constrained texture synthesis. Regular zooms become "semantic zooms", where each level of detail stems from a data source attuned to that resolution. We demonstrate our approach by a medical application 驴 the visualization of a human liver 驴 but its principles readily apply to any scenario, as long as data at all resolutions are available. We will first present a 2D viewing application, called the "virtual microscope", and then extend our technique to 3D volumetric viewing.