An active research area in Machine Learning is the construction of multiple classifier systems that increase the accuracy of simple classifiers. In this paper we present ML-CIDIM, a method that improves accuracy further. It builds on a multiple classifier system whose base classifier is CIDIM, an algorithm that induces small and accurate decision trees. CIDIM randomly divides the training set into two subsets and uses them to establish an internal bound condition. ML-CIDIM induces several multiple classifier systems based on CIDIM and arranges them in layers, each trying to improve on the accuracy of the previous layer. In this way, the accuracy obtained by a single CIDIM-based multiple classifier system can be improved. Regarding accuracy, the classifier system built with ML-CIDIM competes well against bagging and boosting at statistically significant confidence levels.
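The layered scheme described above can be illustrated with a minimal sketch. This is not the authors' ML-CIDIM implementation: CIDIM itself is replaced by a trivial one-feature threshold learner, each "multiple classifier system" is a bagging-style majority vote of such learners (an assumption), and the internal bound condition is approximated by accepting a new layer only if it does not hurt accuracy on a held-out random half of the training set, mirroring CIDIM's random two-subset split.

```python
import random
from collections import Counter

def train_stump(data):
    """Stand-in base classifier (hypothetical, not CIDIM): pick the
    threshold on the single feature that best separates the labels."""
    best_t, best_acc = None, -1.0
    for x, _ in data:
        acc = sum((x2 >= x) == y2 for x2, y2 in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = x, acc
    return lambda v: v >= best_t

def train_ensemble(data, n_members=5, rng=None):
    """One multiple classifier system: majority vote of stumps trained
    on bootstrap samples (bagging-style; an assumed simplification)."""
    rng = rng or random.Random(0)
    members = [train_stump([rng.choice(data) for _ in data])
               for _ in range(n_members)]
    return lambda v: Counter(m(v) for m in members).most_common(1)[0][0]

def train_layers(data, n_layers=3):
    """Layered scheme in the spirit of ML-CIDIM: each layer is an
    ensemble; a new layer is kept only if the stacked vote does not
    lose accuracy on a held-out random half (assumed bound condition)."""
    rng = random.Random(1)
    shuffled = data[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    train, held = shuffled[:half], shuffled[half:]
    layers, best_acc = [], 0.0
    for _ in range(n_layers):
        candidate = layers + [train_ensemble(train, rng=rng)]
        stacked = lambda v, ls=candidate: Counter(
            l(v) for l in ls).most_common(1)[0][0]
        acc = sum(stacked(x) == y for x, y in held) / len(held)
        if acc >= best_acc:  # keep the layer only if it does not hurt
            layers, best_acc = candidate, acc
    return lambda v: Counter(l(v) for l in layers).most_common(1)[0][0]

# Toy data: the label is True when the feature exceeds 0.5.
data = [(i / 20.0, i / 20.0 > 0.5) for i in range(21)]
model = train_layers(data)
```

On this separable toy problem the stacked model recovers the decision boundary; the point of the sketch is only the control flow: ensembles stacked layer by layer, each accepted or rejected against a held-out split.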