We are working on the problem of developing a flexible, generic meta-learning process that supports algorithm selection by studying algorithms' past performance behavior. State-of-the-art machine learning systems are limited in that they require a great deal of human supervision to select an effective algorithm, with corresponding options, for a specific domain. Additionally, very little guidance is available for algorithm-parameter selection, and the number of available choices is overwhelming. In this paper, we develop a flexible, large-scale experimental framework for a meta-controller that supports exploration of the algorithm-parameter space and recommends algorithms for a given dataset. First, we aim to provide an easy-to-use process for creating a search space for algorithm selection by automatically exploring possible combinations of algorithms and key parameters. Second, our goal is to produce algorithm recommendations by examining past behavior on related datasets. Our main contribution is the implemented framework itself, which uses a wide variety of strategies to automatically generate a search space and recommend algorithms for a specific dataset. We evaluate our system with 40 major algorithms on 20 datasets from the UCI repository. Each dataset is represented by 25 data characteristics. We generate and run 7,510 combinations of algorithms, parameters, and datasets. Our experiments show that our framework offers a friendly way to set up a machine learning experiment while providing an accurate ranking of recommended algorithms based on past behavior. Specifically, 88% of the recommended algorithm rankings correlated significantly with the true rankings for the given datasets.
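The abstract does not give the authors' implementation, but the recommendation scheme it describes (characterize each dataset by meta-features, look up the most similar past datasets, and rank algorithms by their performance there, scored against the true ranking by rank correlation) can be sketched roughly as follows. All function names, dimensions, and data below are illustrative assumptions, not the paper's code:

```python
import numpy as np

# Hypothetical sketch of meta-feature-based algorithm recommendation:
# find the k past datasets nearest to the query in meta-feature space,
# average each algorithm's measured performance on those neighbours,
# and compare the resulting ranking to the true one with Spearman's rho.

def recommend_scores(meta_features, performance, query, k=3):
    """Predict a quality score per algorithm for a new dataset.

    meta_features: (n_datasets, n_meta) data characteristics of past datasets
    performance:   (n_datasets, n_algos) measured accuracy per algorithm
    query:         (n_meta,) meta-features of the new dataset
    """
    dists = np.linalg.norm(meta_features - query, axis=1)
    neighbours = np.argsort(dists)[:k]           # k most similar past datasets
    return performance[neighbours].mean(axis=0)  # mean accuracy per algorithm

def spearman(a, b):
    """Spearman rank correlation (no-ties case): Pearson on the ranks."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]

# Toy meta-dataset: 6 past datasets, 25 meta-features (as in the paper's
# setup), 5 candidate algorithms; values are random placeholders.
rng = np.random.default_rng(0)
mf = rng.random((6, 25))
perf = rng.random((6, 5))

new_mf, true_perf = rng.random(25), rng.random(5)
pred = recommend_scores(mf, perf, new_mf, k=3)
rho = spearman(pred, true_perf)  # agreement of recommended vs. true ranking
```

A ranking is counted as accurate when this correlation is significantly positive; the 88% figure reported above is the fraction of test datasets for which that held.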