There is currently no unified methodology for evaluating both supervised and unsupervised classification algorithms. Supervised problems are evaluated with Quality Functions, which require a previously known solution to the problem, while unsupervised problems are evaluated with various Structural Indexes, which do not judge the classification algorithm by the same pattern-similarity criteria embedded in the algorithm itself. In both cases, much useful information remains hidden or is ignored by the evaluation method, such as the quality of the supervision sample or the structural change the classification algorithm induces on that sample. This paper proposes a unified methodology for evaluating classification problems of both kinds; it supports comparative evaluations and gives the evaluator more information about the quality of the initial sample, when one exists, and about the change produced by the classification algorithm.
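The contrast the abstract draws can be made concrete with a minimal sketch. The example below is illustrative only (the data, function names, and the particular index are not from the paper): a supervised Quality Function such as accuracy needs the true labels, whereas an unsupervised Structural Index such as a crude Dunn-style ratio judges only the geometry of the resulting clusters, not the similarity criterion the clustering algorithm itself used.

```python
def accuracy(true_labels, predicted_labels):
    """Supervised Quality Function: requires a previously known solution."""
    matches = sum(t == p for t, p in zip(true_labels, predicted_labels))
    return matches / len(true_labels)

def dunn_like_index(points, labels):
    """Unsupervised Structural Index (a crude Dunn-style ratio):
    minimum inter-cluster distance / maximum intra-cluster distance.
    It evaluates cluster geometry, not the algorithm's own
    pattern-similarity criterion -- the mismatch the abstract notes."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    max_intra, min_inter = 0.0, float("inf")
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = dist(points[i], points[j])
            if labels[i] == labels[j]:
                max_intra = max(max_intra, d)
            else:
                min_inter = min(min_inter, d)
    return min_inter / max_intra if max_intra > 0 else float("inf")

# Two tight, well-separated 2-D clusters, correctly labelled (toy data):
points = [(0, 0), (0, 1), (10, 10), (10, 11)]
labels = [0, 0, 1, 1]
truth  = [0, 0, 1, 1]

print(accuracy(truth, labels))          # -> 1.0
print(dunn_like_index(points, labels))  # large ratio: well-separated clusters
```

Note that the two measures need different inputs (labels vs. geometry), which is exactly why a single, unified evaluation methodology is non-trivial.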