Cross-validation (CV) is the most accurate method available for algorithm recommendation, but it is rather slow. We show that information about the past performance of algorithms can be used for the same purpose with a small loss in accuracy and significant savings in experimentation time. We use a meta-learning framework that combines a simple instance-based learning (IBL) algorithm with a ranking method. We show that results improve significantly when a selected set of measures is used to represent the data characteristics that make it possible to predict algorithm performance. Our results also indicate that the choice of ranking method has a smaller effect on the quality of the recommendations. Finally, we present situations that illustrate the advantage of providing the recommendation as a ranking of the candidate algorithms, rather than as the single algorithm that is expected to perform best.
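The approach described above can be sketched in a few lines: find the past datasets whose characteristics are closest to the new dataset, then aggregate the recorded ranks of the candidate algorithms on those neighbours. This is a minimal illustration of the IBL-plus-ranking idea, not the authors' implementation; the function name, the meta-features, and the toy numbers are all assumptions made for the example (average ranks are used as the aggregation method).

```python
import numpy as np

def recommend_ranking(meta_features, past_ranks, query_features, k=2):
    """Rank candidate algorithms for a new dataset by averaging their
    ranks on the k past datasets most similar to it (Euclidean distance
    on meta-features), instead of running cross-validation from scratch.

    Hypothetical sketch: names and aggregation choice are illustrative.
    """
    X = np.asarray(meta_features, dtype=float)
    q = np.asarray(query_features, dtype=float)
    # Identify the k past datasets whose characteristics are closest.
    dists = np.linalg.norm(X - q, axis=1)
    neighbours = np.argsort(dists)[:k]
    # Aggregate the stored performance ranks of each algorithm (mean rank).
    mean_ranks = np.asarray(past_ranks, dtype=float)[neighbours].mean(axis=0)
    # Lower mean rank is better; return algorithm indices, best first.
    return list(np.argsort(mean_ranks))

# Toy example: 3 past datasets, 2 meta-features, 3 candidate algorithms.
meta = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1]]
ranks = [[1, 2, 3], [1, 3, 2], [3, 1, 2]]  # rank of each algorithm per dataset
order = recommend_ranking(meta, ranks, query_features=[0.15, 0.85], k=2)
```

In this toy run, the two datasets with similar meta-features both rank algorithm 0 first, so it heads the recommended ranking; the output is a full ordering of the candidates rather than a single winner, which is the form of recommendation the abstract argues for.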