In the literature, there exist statistical tests for comparing supervised learning algorithms on multiple data sets in terms of accuracy, but they do not always generate an ordering. We propose Multi^2Test, a generalization of our previous work, for ordering multiple learning algorithms on multiple data sets from "best" to "worst", where our goodness measure combines a prior cost term with the generalization error. Our simulations show that Multi^2Test generates orderings using pairwise tests on error together with different types of cost based on the time and space complexity of the learning algorithms.
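To give a rough feel for the kind of procedure the abstract describes, the sketch below orders a few hypothetical algorithms by combining pairwise signed-rank tests on per-data-set errors with a prior cost term, preferring the cheaper algorithm when a comparison is not significant. This is a toy illustration under invented assumptions (the error values, cost values, weights, and tie-breaking rule are all made up); it is not the authors' Multi^2Test procedure.

```python
# Toy sketch: order learning algorithms from pairwise tests on error plus a
# cost term. NOT the authors' Multi^2Test; all names and numbers are invented.

from itertools import combinations
import numpy as np
from scipy.stats import wilcoxon

# errors[name]: per-data-set error estimates (e.g., mean cross-validation error
# on each of 8 data sets); costs[name]: a scalar prior cost term (e.g., a
# normalized measure of the algorithm's time/space complexity).
errors = {
    "svm":  np.array([0.12, 0.08, 0.15, 0.20, 0.11, 0.09, 0.14, 0.18]),
    "tree": np.array([0.15, 0.10, 0.16, 0.22, 0.13, 0.12, 0.17, 0.21]),
    "knn":  np.array([0.14, 0.09, 0.18, 0.21, 0.12, 0.11, 0.16, 0.20]),
}
costs = {"svm": 0.6, "tree": 0.2, "knn": 0.4}
alpha = 0.05        # significance level for the pairwise test (assumed)
cost_weight = 0.1   # weight of the prior cost term in the goodness measure (assumed)

# Goodness = average error + weighted cost (smaller is better).
goodness = {a: errors[a].mean() + cost_weight * costs[a] for a in errors}

# Pairwise Wilcoxon signed-rank tests over data sets: if the error difference
# is significant, the algorithm with the better goodness wins the comparison;
# otherwise the pair is treated as tied and the cheaper algorithm is preferred.
wins = {a: 0 for a in errors}
for a, b in combinations(errors, 2):
    _, p = wilcoxon(errors[a], errors[b])
    if p < alpha:
        winner = a if goodness[a] < goodness[b] else b
    else:
        winner = a if costs[a] <= costs[b] else b
    wins[winner] += 1

# Order from "best" to "worst" by pairwise wins, breaking ties by goodness.
ordering = sorted(errors, key=lambda a: (-wins[a], goodness[a]))
print("ordering (best to worst):", ordering)
```

The design choice illustrated here is only that an ordering can be assembled from pairwise decisions, with cost acting both inside the goodness measure and as a tie-breaker; the actual test statistics and aggregation used in Multi^2Test are described in the paper itself.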