Multi-level Ground Glass Nodule Detection and Segmentation in CT Lung Images

  • Authors and Affiliations:
  • Yimo Tao (CAD R&D, Siemens Healthcare, Malvern, USA and Dept. of Electrical and Computer Engineering, Virginia Tech, Arlington, USA)
  • Le Lu (CAD R&D, Siemens Healthcare, Malvern, USA)
  • Maneesh Dewan (CAD R&D, Siemens Healthcare, Malvern, USA)
  • Albert Y. Chen (CAD R&D, Siemens Healthcare, Malvern, USA and Dept. of CSE, University at Buffalo SUNY, Buffalo, USA)
  • Jason Corso (Dept. of CSE, University at Buffalo SUNY, Buffalo, USA)
  • Jianhua Xuan (Dept. of Electrical and Computer Engineering, Virginia Tech, Arlington, USA)
  • Marcos Salganicoff (CAD R&D, Siemens Healthcare, Malvern, USA)
  • Arun Krishnan (CAD R&D, Siemens Healthcare, Malvern, USA)

  • Venue:
  • MICCAI '09 Proceedings of the 12th International Conference on Medical Image Computing and Computer-Assisted Intervention: Part II
  • Year:
  • 2009

Abstract

Early detection of Ground Glass Nodules (GGNs) in lung Computed Tomography (CT) images is important for lung cancer prognosis. Because GGNs have indistinct boundaries, their manual detection and segmentation is labor-intensive and error-prone. In this paper, we propose a novel multi-level learning-based framework for automatic detection and segmentation of GGNs in lung CT images. Our main contributions are as follows. First, a multi-level statistical learning-based approach seamlessly integrates segmentation and detection to improve the overall accuracy of GGN detection within a subvolume. Classification is performed at two levels: the voxel level and the object level. The algorithm starts with a three-phase voxel-level classification step that uses volumetric features computed per voxel to generate a GGN class-conditional probability map. GGN candidates are then extracted from this probability map by incorporating prior knowledge of shape and location, and an object-level classifier determines whether a GGN is present. Second, an extensive set of volumetric features is used to capture GGN appearance. Finally, to the best of our knowledge, the GGN dataset used in our experiments is an order of magnitude larger than those in previous work. The effectiveness of our method is demonstrated on a dataset of 1100 subvolumes (100 containing GGNs) extracted from about 200 subjects.
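The multi-level pipeline described above (voxel-level probability map, candidate extraction, object-level decision) can be sketched schematically as follows. This is a minimal illustration, not the authors' implementation: the logistic voxel model, the 6-connected component grouping, and the mean-probability acceptance rule are illustrative stand-ins for the paper's learned classifiers over extensive volumetric features and its shape/location priors.

```python
import numpy as np
from collections import deque

def voxel_probability_map(volume, w=4.0, b=-2.0):
    # Voxel-level step (schematic): a logistic model over raw intensity
    # stands in for the paper's learned voxel classifier; w and b are
    # illustrative placeholder parameters.
    return 1.0 / (1.0 + np.exp(-(w * volume + b)))

def extract_candidates(prob_map, thresh=0.6, min_size=4):
    # Candidate extraction (schematic): threshold the probability map and
    # group voxels into 6-connected components; a size floor stands in
    # for the paper's shape/location priors.
    mask = prob_map > thresh
    labels = np.zeros(mask.shape, dtype=int)
    candidates, next_label = [], 0
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue  # voxel already assigned to a component
        next_label += 1
        labels[seed] = next_label
        component, queue = [seed], deque([seed])
        while queue:  # breadth-first flood fill
            z, y, x = queue.popleft()
            for dz, dy, dx in offsets:
                n = (z + dz, y + dy, x + dx)
                if (all(0 <= n[i] < mask.shape[i] for i in range(3))
                        and mask[n] and not labels[n]):
                    labels[n] = next_label
                    component.append(n)
                    queue.append(n)
        if len(component) >= min_size:
            candidates.append(component)
    return candidates

def object_level_classify(prob_map, candidates, mean_prob_thresh=0.7):
    # Object-level step (schematic): accept a candidate when its mean
    # voxel probability is high; the paper instead applies a trained
    # object-level classifier to candidate features.
    hits = []
    for component in candidates:
        idx = tuple(np.array(component).T)
        if prob_map[idx].mean() > mean_prob_thresh:
            hits.append(component)
    return hits

def detect_ggn(volume):
    prob = voxel_probability_map(volume)
    return object_level_classify(prob, extract_candidates(prob))
```

On a toy subvolume with a single bright cube, `detect_ggn` returns one accepted candidate; on an empty subvolume it returns none. The two-level structure mirrors the abstract's design choice: cheap per-voxel scoring narrows the search, and the object-level stage rejects false positives among the surviving candidates.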