Active shape models—their training and application. Computer Vision and Image Understanding.
Region Competition: Unifying Snakes, Region Growing, and Bayes/MDL for Multiband Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Mean Shift: A Robust Approach Toward Feature Space Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Physics-Based Deformable Models: Applications to Computer Vision, Graphics, and Medical Imaging.
Geodesic Active Regions and Level Set Methods for Supervised Texture Segmentation. International Journal of Computer Vision.
A Multiphase Level Set Framework for Image Segmentation Using the Mumford and Shah Model. International Journal of Computer Vision.
ECCV '98: Proceedings of the 5th European Conference on Computer Vision, Volume II.
ICCV '95: Proceedings of the Fifth International Conference on Computer Vision.
In this paper, we introduce an adaptive model-based segmentation framework in which edge and region information are integrated and used adaptively while a solid model deforms toward the object boundary. Our 3D segmentation method stems from the Metamorphs deformable models [1]. The main novelty of our work is that, instead of performing segmentation over an entire 3D volume, we propose model-based segmentation in an adaptively changing subvolume of interest. The subvolume is determined from appearance statistics of the evolving object model, and within the subvolume, more accurate and object-specific edge and region information can be obtained. This local and adaptive scheme for computing edge and object-region information makes our segmentation more efficient and more robust to image noise, artifacts, and intensity inhomogeneity. External forces for model deformation are derived in a variational framework consisting of both edge-based and region-based energy terms, taking into account the adaptively changing environment. We demonstrate the performance of our method through extensive experiments on cardiac MR and liver CT images.
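The core idea of an adaptively changing subvolume of interest can be sketched as follows: take the bounding box of the evolving model, expand it by a margin, and evaluate a region-based (appearance) energy only inside that subvolume, using intensity statistics gathered from voxels currently inside the model. This is a minimal illustration, not the paper's implementation; the function names, the `margin` parameter, and the Gaussian appearance model are assumptions made for the sketch.

```python
import numpy as np

def adaptive_subvolume(model_mask, margin=4):
    # Bounding box of the current model, expanded by a margin
    # and clipped to the volume extent (hypothetical helper).
    coords = np.argwhere(model_mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + 1 + margin, model_mask.shape)
    return tuple(slice(l, h) for l, h in zip(lo, hi))

def region_energy(volume, model_mask, roi):
    # Appearance statistics (mean/std) estimated from voxels inside
    # the evolving model; energy evaluated only inside the subvolume.
    inside = volume[model_mask]
    mu, sigma = inside.mean(), inside.std() + 1e-8
    sub = volume[roi]
    # Low where a voxel matches the object's appearance, high elsewhere.
    return ((sub - mu) / sigma) ** 2

# Toy 3D volume: a bright cube (object) on a dark, noisy background.
rng = np.random.default_rng(0)
vol = rng.normal(0.1, 0.02, (32, 32, 32))
vol[10:22, 10:22, 10:22] += 0.8
mask = np.zeros(vol.shape, dtype=bool)
mask[13:19, 13:19, 13:19] = True  # evolving model, seeded inside the object

roi = adaptive_subvolume(mask, margin=4)
E = region_energy(vol, mask, roi)
```

As the model grows, `adaptive_subvolume` is recomputed, so the statistics and energies stay local to the object rather than being dominated by distant structures or global intensity inhomogeneity.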