Combining meta-learning and active selection of datasetoids for algorithm selection
HAIS'11 Proceedings of the 6th international conference on Hybrid artificial intelligent systems - Volume Part I
Several meta-learning approaches have been developed for the problem of algorithm selection. In this context, it is of central importance to collect a sufficient number of datasets to serve as meta-examples, so that the results are reliable. Recently, some proposals for generating datasets have addressed this issue with successful results. These include datasetoids, a simple manipulation method that derives new datasets from existing ones. However, increasing the number of datasets raises another issue: to generate training meta-examples, the performance of the algorithms must be estimated on each dataset. This typically requires running all candidate algorithms on all datasets, which is computationally very expensive. One approach to this problem is to apply active learning to meta-learning, termed active meta-learning. In this paper we investigate the combined use of datasetoids and an active meta-learning approach based on an uncertainty score. Based on our results, we conclude that our method achieves very good accuracy with as little as 10% to 20% of the meta-examples labeled.
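The core loop described above — scoring unlabeled meta-examples by uncertainty and labeling only the most informative ones — can be sketched as follows. This is a minimal illustration, not the paper's exact method: it assumes meta-examples are feature vectors, uses a k-nearest-neighbor meta-learner, and defines uncertainty as one minus the majority-class proportion among the neighbors (all of these are simplifying assumptions for the sketch).

```python
import math

def knn_class_proportions(x, labeled, k=3):
    # Proportion of each class among the k nearest labeled meta-examples.
    # `labeled` is a list of (feature_vector, label); distance is Euclidean.
    neighbors = sorted(labeled, key=lambda fl: math.dist(x, fl[0]))[:k]
    counts = {}
    for _, label in neighbors:
        counts[label] = counts.get(label, 0) + 1
    return {c: n / len(neighbors) for c, n in counts.items()}

def uncertainty(x, labeled, k=3):
    # Uncertainty score: 1 - proportion of the majority class.
    # 0 means the neighbors agree unanimously; higher means more disagreement.
    props = knn_class_proportions(x, labeled, k)
    return 1.0 - max(props.values())

def select_most_uncertain(pool, labeled, k=3):
    # Active selection step: pick the unlabeled meta-example (e.g. a
    # datasetoid) on which the meta-learner is least certain. Only this
    # one would then have its algorithm performances estimated (labeled).
    return max(pool, key=lambda x: uncertainty(x, labeled, k))

if __name__ == "__main__":
    # Toy meta-data: two labeled clusters and three candidate meta-examples.
    labeled = [((0.0, 0.0), "A"), ((0.0, 1.0), "A"),
               ((5.0, 5.0), "B"), ((5.0, 6.0), "B")]
    pool = [(0.2, 0.2), (2.5, 2.8), (5.2, 5.2)]
    # The point between the clusters is selected for labeling first.
    print(select_most_uncertain(pool, labeled, k=2))
```

Repeating the selection step until a small labeling budget (e.g. 10%–20% of the pool) is spent is what allows the meta-learner to be trained without running every candidate algorithm on every dataset.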