"GrabCut": interactive foreground extraction using iterated graph cuts
ACM SIGGRAPH 2004 Papers
Segmentation of SBFSEM Volume Data of Neural Tissue by Hierarchical Classification
Proceedings of the 30th DAGM symposium on Pattern Recognition
Automatic joint classification and segmentation of whole cell 3D images
Pattern Recognition
Graph cuts framework for kidney segmentation with prior shape constraints
MICCAI'07 Proceedings of the 10th international conference on Medical image computing and computer-assisted intervention - Volume Part I
ECCV'06 Proceedings of the 9th European conference on Computer Vision - Volume Part I
Learning to combine bottom-up and top-down segmentation
ECCV'06 Proceedings of the 9th European conference on Computer Vision - Volume Part IV
Learning class-specific edges for object detection and segmentation
ICVGIP'06 Proceedings of the 5th Indian conference on Computer Vision, Graphics and Image Processing
A hybrid approach for Pap-Smear cell nucleus extraction
MCPR'11 Proceedings of the Third Mexican conference on Pattern recognition
Neural process reconstruction from sparse user scribbles
MICCAI'11 Proceedings of the 14th international conference on Medical image computing and computer-assisted intervention - Volume Part I
MLMI'11 Proceedings of the Second international conference on Machine learning in medical imaging
Anisotropic ssTEM image segmentation using dense correspondence across sections
MICCAI'12 Proceedings of the 15th international conference on Medical Image Computing and Computer-Assisted Intervention - Volume Part I
While there has been substantial progress in segmenting natural images, methods that perform well on them tend to underperform on the distinct challenges posed by electron microscopy (EM) data. In EM imagery of neural tissue, for example, numerous cells and subcellular structures appear within a single image, their irregular shapes are hard to model with standard techniques, and cluttered background textures confound segmentation. We propose a fully automated approach that handles these challenges by using sophisticated cues that capture global shape and texture information, and by learning the specific appearance of object boundaries. We demonstrate that our approach significantly outperforms state-of-the-art techniques and closely matches the performance of human annotators.
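The core idea of "learning the specific appearance of object boundaries" can be illustrated with a minimal sketch: train a binary classifier to separate boundary pixels from non-boundary pixels using per-pixel feature vectors. The sketch below is purely illustrative and uses synthetic stand-in features and a plain logistic-regression classifier; the paper's actual features, cues, and learning machinery are not specified here and differ from this toy setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: one feature vector per pixel (standing in for
# local texture/shape descriptors) and a binary label marking boundary pixels.
n, d = 200, 8
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)           # synthetic "ground truth" direction
y = (X @ w_true > 0).astype(int)      # 1 = boundary pixel, 0 = background

# Logistic-regression boundary classifier trained by gradient descent;
# this stands in for whatever learned boundary model a real system would use.
w = np.zeros(d)
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted boundary probability
    w -= lr * (X.T @ (p - y)) / n        # gradient of the logistic loss

pred = (X @ w > 0).astype(int)
accuracy = (pred == y).mean()
```

In a real pipeline, the per-pixel boundary probabilities produced by such a classifier would feed a downstream segmentation stage rather than being thresholded directly.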