Uncertainty sampling-based active selection of datasetoids for meta-learning
ICANN'11 Proceedings of the 21st international conference on Artificial neural networks - Volume Part II
Several meta-learning approaches have been developed for the problem of algorithm selection. In this context, it is of central importance to collect a sufficient number of datasets to be used as meta-examples, so that reliable results can be obtained. Recently, some proposals to generate datasets have addressed this issue with successful results. These include datasetoids, a simple manipulation method that derives new datasets from existing ones. However, the increase in the number of datasets raises another issue: in order to generate meta-examples for training, the performance of the candidate algorithms must be estimated on each dataset. This typically requires running all candidate algorithms on all datasets, which is computationally very expensive. One approach to this problem is the use of active learning, termed active meta-learning. In this paper we investigate the combined use of active meta-learning and datasetoids. Our results show that the computational cost of generating meta-examples can be significantly reduced, not only without loss of meta-learning accuracy but with potential accuracy gains.
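The idea described in the abstract — label only the candidate datasetoids the current meta-learner is most uncertain about, instead of running all algorithms on all datasets — can be sketched as a simple pool-based uncertainty-sampling loop. The sketch below is illustrative only, not the paper's implementation: the meta-features, the `oracle_label` stand-in for "run all candidate algorithms and record the winner", the k-NN meta-learner, and the vote-entropy uncertainty measure are all assumptions made for the example.

```python
import math
import random

random.seed(0)

def oracle_label(meta_features):
    # Hypothetical stand-in for the expensive step: running every candidate
    # algorithm on the datasetoid and returning the best-performing one.
    return 0 if meta_features[0] + meta_features[1] < 1.0 else 1

def knn_votes(labeled, x, k=3):
    # Class votes of the k nearest labelled meta-examples (Euclidean distance).
    neighbours = sorted(labeled, key=lambda item: math.dist(item[0], x))[:k]
    return [label for _, label in neighbours]

def vote_entropy(votes):
    # Uncertainty of the meta-learner at x: entropy of the neighbours' votes.
    ent = 0.0
    for c in set(votes):
        p = votes.count(c) / len(votes)
        ent -= p * math.log(p)
    return ent

# Pool of candidate datasetoids, each described by a 2-d meta-feature vector.
pool = [(random.random(), random.random()) for _ in range(200)]
labeled = [(x, oracle_label(x)) for x in pool[:5]]   # small seed meta-dataset
unlabeled = pool[5:]

for _ in range(20):  # label only 20 of the 195 remaining candidates
    x_star = max(unlabeled, key=lambda x: vote_entropy(knn_votes(labeled, x)))
    unlabeled.remove(x_star)
    labeled.append((x_star, oracle_label(x_star)))  # expensive step, done sparingly

print(len(labeled))  # meta-examples actually labelled, out of 200 candidates
```

The loop queries the oracle 25 times (5 seed labels plus 20 actively selected ones) rather than 200, which is the kind of saving the abstract refers to; in the paper the selection operates over datasetoids and the "label" is an estimate of algorithm performance.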