Simplified labeling process for medical image segmentation
MICCAI'12 Proceedings of the 15th International Conference on Medical Image Computing and Computer-Assisted Intervention - Volume Part II
This paper presents an algorithm that uses discriminative sparse representations to segment tissues in optical images of the uterine cervix. Because of large variations in image appearance caused by changing illumination and specular reflection, the color and texture features of different classes in optical images often overlap. Using sparse representations, these features can be transformed into a higher-dimensional space under sparsity constraints, where they become more linearly separable. Unlike previous reconstructive sparse representation methods, the discriminative method considers positive and negative samples simultaneously, so that each learned dictionary represents its own class well but reconstructs the other classes poorly. New data can be reconstructed from their sparse representations over the positive and/or negative dictionaries, and classification is achieved by comparing the resulting reconstruction errors. In the experiments, we applied our method to automatically segment the biomarker AcetoWhite (AW) regions in an archive of uterine cervix images. Compared with other general methods, including SVM, nearest neighbor, and reconstructive sparse representations, our approach achieved higher sensitivity and specificity.
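The decision rule described in the abstract — sparse-code a sample over each class dictionary, then assign it to the class with the smallest reconstruction error — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`omp`, `classify_by_reconstruction`) are invented for this sketch, the dictionaries are assumed to be given (the paper learns discriminative dictionaries from positive and negative samples), and a basic greedy orthogonal matching pursuit stands in for whatever sparse coder the authors used.

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: sparse-code x over dictionary D
    (columns are atoms) using at most k nonzero coefficients."""
    residual = x.copy()
    support = []
    coef = np.zeros(D.shape[1])
    sol = np.zeros(0)
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit all selected atoms jointly (the "orthogonal" step).
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coef[support] = sol
    return coef

def classify_by_reconstruction(x, dictionaries, k=3):
    """Assign x to the class whose dictionary reconstructs it with the
    smallest error, as in the comparison of reconstructive errors above."""
    errors = []
    for D in dictionaries:
        c = omp(D, x, k)
        errors.append(np.linalg.norm(x - D @ c))
    return int(np.argmin(errors))
```

A discriminatively trained dictionary sharpens this rule: because each dictionary is optimized to reconstruct its own class well and the competing classes poorly, the gap between the two reconstruction errors widens, which is what drives the reported gains over purely reconstructive dictionaries.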