Relation-aware spreadsheets for multimodal volume segmentation and visualization

  • Authors:
  • Lin Zheng; Yingcai Wu; Kwan-Liu Ma

  • Affiliations:
  • Department of Computer Science, University of California, Davis (all authors)

  • Venue:
  • MLMI'10: Proceedings of the First International Conference on Machine Learning in Medical Imaging
  • Year:
  • 2010


Abstract

Multimodal volume data, commonly found in medical imaging applications, present both opportunities and challenges for segmentation and visualization tasks. This paper presents a user-directed volume segmentation system. Through a spreadsheet interface, the user can interactively examine and refine segmentation results obtained from automatic clustering. In addition, the user can isolate or highlight a feature of interest in a volume based on different modalities and see the corresponding segmented results. Our system is easy to use because the preliminary segmentation results are organized and presented to the user in a relation-aware fashion, based on the spatial relations between the segmented regions. We demonstrate this system using two multimodal datasets.
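
The abstract describes a pipeline in which voxels from co-registered modalities are first clustered automatically into preliminary segments, which are then organized for the spreadsheet view according to spatial relations between segmented regions. Below is a minimal sketch of that idea, assuming k-means over per-voxel multimodal intensities and 6-connected adjacency between segments; the function names (cluster_multimodal_volume, spatial_adjacency) and the choice of clustering algorithm are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans


def cluster_multimodal_volume(volumes, n_clusters=8, random_state=0):
    """Cluster voxels of co-registered modalities into preliminary segments.

    volumes: list of 3D arrays (one per modality), all with identical shape.
    Returns a 3D label volume with values in [0, n_clusters).
    """
    features = np.stack([v.ravel() for v in volumes], axis=1).astype(np.float64)
    # Standardize each modality so no single one dominates the distance metric.
    features -= features.mean(axis=0)
    features /= features.std(axis=0) + 1e-8
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=random_state).fit_predict(features)
    return labels.reshape(volumes[0].shape)


def spatial_adjacency(label_volume, n_clusters):
    """Build a segment-to-segment adjacency matrix from 6-connected voxel pairs.

    Such spatial relations could drive a relation-aware ordering of segments
    in a spreadsheet-style view (an assumption about the paper's organization).
    """
    adj = np.zeros((n_clusters, n_clusters), dtype=bool)
    for axis in range(3):
        # Compare every voxel with its neighbor along this axis.
        a = np.take(label_volume, range(label_volume.shape[axis] - 1), axis=axis)
        b = np.take(label_volume, range(1, label_volume.shape[axis]), axis=axis)
        pairs = np.stack([a.ravel(), b.ravel()], axis=1)
        pairs = pairs[pairs[:, 0] != pairs[:, 1]]  # keep only cross-segment pairs
        adj[pairs[:, 0], pairs[:, 1]] = True
        adj[pairs[:, 1], pairs[:, 0]] = True
    return adj
```

As a usage sketch, two registered volumes (for example, CT and MR intensities of the same subject) would be passed as `volumes`; the resulting label volume gives the preliminary segments a user could then inspect and refine, and the adjacency matrix records which segments touch which.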