Efficient multimodality volume fusion using graphics hardware

  • Authors:
  • Helen Hong, Juhee Bae, Heewon Kye, Yeong Gil Shin

  • Affiliations:
  • School of Computer Science and Engineering, BK21: Information Technology, Seoul National University, Seoul, Korea; School of Computer Science and Engineering, Seoul National University, Seoul, Korea

  • Venue:
  • ICCS'05 Proceedings of the 5th international conference on Computational Science - Volume Part III
  • Year:
  • 2005

Abstract

We propose a novel multimodality volume fusion technique that uses graphics hardware to solve the depth-cueing problem at reduced computational cost. Our method consists of three steps. First, it takes two volumes and generates sample planes orthogonal to the viewing direction, following 3D texture-mapping volume rendering. Second, it composites the textured slices from the different modalities using several compositing operations. Third, it performs alpha blending over all the slices. For efficient volume fusion, a pixel program is written in HLSL (High Level Shader Language). Experimental results show that our hardware-accelerated method correctly distinguishes depth in overlapping regions of the volumes and renders much faster than conventional software-based methods.
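The pipeline of the abstract — per-slice fusion of two modalities followed by front-to-back alpha blending of the view-aligned slices — can be sketched on the CPU as below. This is a minimal illustration, not the authors' HLSL shader: the specific compositing operators (`average`, `maximum`) and the flat-list slice representation are assumptions for the sketch, since the abstract does not enumerate the operations used.

```python
def composite_slices(slice_a, slice_b, mode="average"):
    """Step 2 (sketch): fuse two co-registered modality slices pixel by pixel.
    The operators here are illustrative assumptions; the paper's exact
    compositing operations are not listed in the abstract."""
    if mode == "average":
        return [0.5 * (a + b) for a, b in zip(slice_a, slice_b)]
    if mode == "maximum":
        return [max(a, b) for a, b in zip(slice_a, slice_b)]
    raise ValueError(f"unknown compositing mode: {mode}")

def alpha_blend(slices, alphas):
    """Step 3 (sketch): front-to-back alpha blending of view-aligned slices.
    Each slice is a flat list of intensities; alphas holds matching opacities."""
    color = [0.0] * len(slices[0])
    trans = [1.0] * len(slices[0])  # accumulated transparency per pixel
    for sl, al in zip(slices, alphas):
        for i, (c, a) in enumerate(zip(sl, al)):
            color[i] += trans[i] * a * c   # add this slice's contribution
            trans[i] *= (1.0 - a)          # attenuate what lies behind it
    return color
```

On the GPU, each fused slice would instead be produced by a pixel shader sampling both 3D textures, and the blending loop is replaced by the hardware alpha-blend stage as the slices are drawn front to back.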