Multimodal volume rendering with 3D textures

  • Authors:
  • Pascual Abellán; Dani Tost

  • Affiliations:
  • Universitat Politècnica de Catalunya, Spain (both authors)

  • Venue:
  • Computers and Graphics
  • Year:
  • 2008



Abstract

In this paper, we describe a volume rendering application for multimodal datasets based on 3D texture mapping. Our method takes as input two pre-registered voxel models and constructs a 3D texture from each. It renders the multimodal data by depth-compositing view-aligned texture slices of the model. For each texel of a slice, a fragment shader fetches a sample from each 3D texture and performs fusion and shading. The application lets users choose either emission-absorption shading or surface shading for each model; shading is implemented with two auxiliary 1D textures per transfer function. Moreover, data fusion takes into account the presence of surfaces and the specific values being merged, so the weight of each modality in the fusion is not constant but is defined through a 2D transfer function implemented as a 2D texture. The method is fast and versatile, and it provides good insight into multimodal data.
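The per-texel pipeline the abstract describes (two texture fetches, 1D transfer-function classification, value-dependent fusion via a 2D weight table, then depth compositing of the slices) can be sketched on the CPU with NumPy. This is a minimal illustrative sketch, not the paper's actual shader: the function names, table sizes, and the convex-combination fusion rule are assumptions, and surface shading is omitted for brevity.

```python
import numpy as np

def classify(values, tf):
    """Emulate a 1D texture fetch: tf is an (N, 4) RGBA lookup table,
    values are scalars in [0, 1]."""
    idx = np.clip((values * (len(tf) - 1)).astype(int), 0, len(tf) - 1)
    return tf[idx]

def fuse(rgba_a, rgba_b, va, vb, weight_tf):
    """Value-dependent fusion: weight_tf is an (N, N) table (emulating
    the 2D transfer-function texture) indexed by the pair of scalar
    values; it gives the per-texel weight of modality A."""
    n = weight_tf.shape[0]
    ia = np.clip((va * (n - 1)).astype(int), 0, n - 1)
    ib = np.clip((vb * (n - 1)).astype(int), 0, n - 1)
    w = weight_tf[ia, ib][..., None]
    return w * rgba_a + (1.0 - w) * rgba_b   # convex combination (assumed rule)

def composite(slices_a, slices_b, tf_a, tf_b, weight_tf):
    """Back-to-front alpha compositing of fused view-aligned slices.
    slices_*: (S, H, W) pre-registered scalar slabs, front slice first."""
    h, w = slices_a.shape[1:]
    out = np.zeros((h, w, 3))
    for va, vb in zip(slices_a[::-1], slices_b[::-1]):   # back to front
        rgba = fuse(classify(va, tf_a), classify(vb, tf_b), va, vb, weight_tf)
        alpha = rgba[..., 3:4]
        out = rgba[..., :3] * alpha + out * (1.0 - alpha)
    return out
```

On the GPU, `classify` corresponds to a fetch into an auxiliary 1D texture, `fuse` to a fetch into the 2D transfer-function texture inside the fragment shader, and the compositing loop to hardware blending of the view-aligned slices.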