Left Ventricle Segmentation from Contrast Enhanced Fast Rotating Ultrasound Images Using Three Dimensional Active Shape Models

  • Authors:
  • Meng Ma; Marijn Stralen; Johan H. Reiber; Johan G. Bosch; Boudewijn P. Lelieveldt

  • Affiliations:
  • Division of Image Processing, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
  • Department of Experimental Echocardiography, Thoraxcenter, Erasmus MC, Rotterdam, The Netherlands

  • Venue:
  • FIMH '09: Proceedings of the 5th International Conference on Functional Imaging and Modeling of the Heart
  • Year:
  • 2009

Abstract

In this paper, we propose a novel segmentation technique for quantification of sparsely sampled, single-beat 3D contrast enhanced echocardiographic data acquired with a Fast Rotating Ultrasound (FRU) transducer. The method uses a 3D Active Shape Model (ASM) of the Left Ventricle (LV) in combination with local appearance models as prior knowledge to steer the segmentation. From a set of semi-manually delineated contours, 3D meshes of the LV endocardium are constructed for different cardiac phases. The mesh surfaces are partitioned into a fixed number of regions, each of which is modeled by a local image appearance model. During segmentation, model update points are generated from similarity matches with these local appearance models in multiple curved 2D cross-sections and then propagated over a dense 3D mesh. The Active Shape Model effectively constrains the shape of the 3D mesh to a statistically plausible cardiac shape. Leave-one-out cross-validation was carried out on single-beat contrast enhanced FRU data from 18 patients suffering from various cardiac pathologies. Experiments show that the proposed method generates segmentation results that agree with the ground truth contours, with an average Point-to-Point (P2P) error of 4.1±2.0 mm and an average Point-to-Surface (P2S) error of 2.4±2.1 mm. Convergence tests show that the proposed method produces acceptable segmentation results (less than 1.5 times the error obtained with a favorable initialization) for in-plane displacements of up to 18 to 22 mm and long-axis orientation errors of up to 12 to 14 degrees.
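
The abstract's shape-constraint step, in which candidate update points are projected onto the statistical shape model so that the mesh stays a statistically plausible cardiac shape, can be illustrated with a minimal sketch of the standard ASM plausibility constraint. This is not the authors' implementation: the array names (mean_shape, eigvecs, eigvals), the ±3 standard deviation bound, and the simple corresponding-point error metric are assumptions for illustration only.

```python
import numpy as np

def constrain_shape(candidate_points, mean_shape, eigvecs, eigvals, beta=3.0):
    """Project candidate LV mesh points onto a PCA shape model and clamp each
    mode coefficient to +/- beta standard deviations, returning the nearest
    statistically plausible shape (the standard ASM constraint).

    candidate_points : (N, 3) model update points from appearance matching
    mean_shape       : (N, 3) mean endocardial mesh of the training shapes
    eigvecs          : (3N, k) principal shape modes (one mode per column)
    eigvals          : (k,)   variances of the shape modes
    """
    x = candidate_points.reshape(-1)        # flatten to a 3N vector
    x_bar = mean_shape.reshape(-1)
    b = eigvecs.T @ (x - x_bar)             # shape-mode coefficients
    limit = beta * np.sqrt(eigvals)
    b = np.clip(b, -limit, limit)           # bound each mode to stay plausible
    return (x_bar + eigvecs @ b).reshape(-1, 3)

def p2p_error(points_a, points_b):
    # Mean point-to-point distance between corresponding mesh vertices,
    # analogous to the P2P measure reported in the abstract.
    return np.linalg.norm(points_a - points_b, axis=1).mean()
```

In this sketch the clamped coefficients keep the reconstructed mesh within the span of shapes seen in training, which is how an ASM rejects implausible deformations proposed by the local appearance matches.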