One of the most relevant tasks in Machine Learning is the induction of classifiers, which can be used to classify or to predict. These classifiers can be used in isolation or combined to build a multiple classifier system. Building many-layered systems and understanding the relationships between different base classifiers are of special interest. Thus, in this paper we use the HECIC system, which consists of two layers: the first layer is a multiple classifier system that processes all the examples and tries to classify them; the second layer is an individual classifier that learns from the examples that are not unanimously classified by the first layer, thereby incorporating new information. While using this system in a previous work, we observed that some combinations that hybridize artificial neural networks (ANNs) in one of the two layers appeared to achieve high-accuracy results. In this paper we therefore focus on studying the improvement achieved by using different kinds of ANN in this two-layered system.
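The two-layer scheme described above can be sketched in code. This is a minimal illustration of the general idea (an ensemble first layer, and an individual second-layer classifier trained only on the examples the ensemble does not classify unanimously), not the authors' actual HECIC implementation; the class name `TwoLayerHybrid`, the choice of base learners, and the tie-handling details are all assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier


class TwoLayerHybrid:
    """Sketch of a two-layer hybrid system (hypothetical; HECIC details differ).

    Layer 1: a set of base classifiers that vote on every example.
    Layer 2: an individual classifier trained only on the training
    examples that layer 1 does NOT classify unanimously.
    """

    def __init__(self, base_classifiers, second_layer):
        self.base = base_classifiers
        self.second = second_layer
        self._second_fitted = False

    def _votes(self, X):
        # One row of predictions per base classifier.
        return np.array([clf.predict(X) for clf in self.base])

    def fit(self, X, y):
        for clf in self.base:
            clf.fit(X, y)
        votes = self._votes(X)
        unanimous = np.all(votes == votes[0], axis=0)
        # The second layer learns from the disputed examples only,
        # incorporating information the ensemble could not settle.
        if np.any(~unanimous):
            self.second.fit(X[~unanimous], y[~unanimous])
            self._second_fitted = True
        return self

    def predict(self, X):
        votes = self._votes(X)
        unanimous = np.all(votes == votes[0], axis=0)
        out = votes[0].copy()  # unanimous vote is the answer where it exists
        disputed = ~unanimous
        if np.any(disputed) and self._second_fitted:
            out[disputed] = self.second.predict(X[disputed])
        return out


# Usage on synthetic data, with an ANN (multilayer perceptron) as the
# second-layer individual classifier, as one of the hybridizations studied.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = TwoLayerHybrid(
    base_classifiers=[
        DecisionTreeClassifier(random_state=0),
        GaussianNB(),
        KNeighborsClassifier(),
    ],
    second_layer=MLPClassifier(max_iter=1000, random_state=0),
)
model.fit(X_tr, y_tr)
acc = (model.predict(X_te) == y_te).mean()
```

The design point worth noting is that the second layer never sees the examples the ensemble already agrees on, so it specializes in exactly the region of the input space where the first layer is uncertain.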