Original Contribution: Stacked generalization
Neural Networks
C4.5: programs for machine learning
On the Accuracy of Meta-learning for Scalable Data Mining
Journal of Intelligent Information Systems
Advances in knowledge discovery and data mining
Using output codes to boost multiclass learning problems
ICML '97 Proceedings of the Fourteenth International Conference on Machine Learning
A Dynamic Integration Algorithm for an Ensemble of Classifiers
ISMIS '99 Proceedings of the 11th International Symposium on Foundations of Intelligent Systems
Advanced Dynamic Selection of Diagnostic Methods
CBMS '98 Proceedings of the Eleventh IEEE Symposium on Computer-Based Medical Systems
Data Mining using MLC++, A Machine Learning Library in C++
ICTAI '96 Proceedings of the 8th International Conference on Tools with Artificial Intelligence
An extensible meta-learning approach for scalable and accurate inductive learning
A study of cross-validation and bootstrap for accuracy estimation and model selection
IJCAI'95 Proceedings of the 14th international joint conference on Artificial intelligence - Volume 2
Decision Committee Learning with Dynamic Integration of Classifiers
ADBIS-DASFAA '00 Proceedings of the East-European Conference on Advances in Databases and Information Systems Held Jointly with International Conference on Database Systems for Advanced Applications: Current Issues in Databases and Information Systems
In data mining, the selection of an appropriate classifier to estimate the value of an unknown attribute for a new instance has an essential impact on the quality of the classification result. Recently, promising approaches using parallel and distributed computing have been presented. In this paper, we consider an approach that, as in the arbiter meta-learning technique, uses classifiers trained in parallel on a number of data subsets. We suggest collecting information during the learning phase about the performance of the included base classifiers and arbiters, and using this information during the application phase to select the best classifier dynamically. We evaluate our technique and compare it with simple arbiter meta-learning on selected data sets from the UCI machine learning repository. The results show that our dynamic meta-learning technique significantly outperforms arbiter meta-learning in some cases, but a more thorough analysis is needed before general conclusions can be drawn.
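The two-phase idea the abstract describes — record each classifier's performance during learning, then pick a classifier per instance during application — can be illustrated with a minimal sketch. This is a hypothetical toy example, not the paper's algorithm: the 1-D data, the threshold base learner, and the nearest-neighbour selection rule are all assumptions made for illustration.

```python
# Toy 1-D dataset: instances x in 0..9, class 1 when x >= 5.
train = [(x, int(x >= 5)) for x in range(10)]

def fit_threshold(subset):
    """Toy base learner: pick the threshold that best separates the subset."""
    best_t, best_acc = 0.0, -1.0
    for t in [x for x, _ in subset]:
        acc = sum(int(x >= t) == y for x, y in subset) / len(subset)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return lambda x, t=best_t: int(x >= t)

# Learning phase: train one classifier per data subset (classifiers trained
# on data subsets in parallel, as in arbiter meta-learning) and record each
# classifier's correctness on every training instance.
subsets = [train[0::2], train[1::2]]
classifiers = [fit_threshold(s) for s in subsets]
correct = {i: [clf(x) == y for x, y in train]
           for i, clf in enumerate(classifiers)}

# Application phase: for a new instance, dynamically select the classifier
# that was most accurate on the k nearest training instances.
def predict(x_new, k=3):
    nearest = sorted(range(len(train)),
                     key=lambda i: abs(train[i][0] - x_new))[:k]
    best = max(correct, key=lambda c: sum(correct[c][i] for i in nearest))
    return classifiers[best](x_new)

print(predict(7.3))  # → 1 (x >= 5 is class 1 in this toy data)
```

The selection step is the point of interest: instead of always routing through a fixed arbiter, the collected performance records let the ensemble choose, per instance, whichever member has been most reliable in that region of the input space.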