This article presents a new method for constructing a multiple classifier system in which diverse base classifiers are created through weight tuning. The base classifiers are multilayer perceptrons, and diversity among them is induced by a three-step procedure. In the first step, the base classifiers are trained to an acceptable accuracy. In the second step, a weight-tuning process adjusts the weights of each classifier so that it distinguishes one class of the input data from the others with the highest possible accuracy; an evolutionary method is used to optimize each base classifier's performance on this one-class-versus-rest task. In the third step, a new method combines the outputs of the base classifiers. Diversity is measured, using a confusion matrix, and monitored throughout the entire procedure. The superiority of the proposed method is demonstrated against several well-known classifier fusion methods on standard benchmark datasets.
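The three-step procedure above can be sketched as follows. This is an illustrative simplification, not the paper's exact algorithm: the base classifiers here are linear scorers rather than full multilayer perceptrons, the evolutionary weight tuning is reduced to a simple (1+1) evolution strategy, and all function names and parameters are assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def score(w, X):
    # Linear base classifier: one real-valued score per sample.
    return X @ w

def one_vs_rest_fitness(w, X, y, cls):
    # Fitness = accuracy of deciding "sample belongs to cls"
    # by thresholding the score at zero (step 2's objective).
    pred = score(w, X) > 0
    return np.mean(pred == (y == cls))

def evolve_weights(X, y, cls, iters=200, sigma=0.1):
    # (1+1) evolution strategy standing in for the paper's
    # evolutionary weight-tuning: keep a mutation only if it
    # does not decrease the one-vs-rest fitness.
    w = rng.normal(size=X.shape[1])
    best = one_vs_rest_fitness(w, X, y, cls)
    for _ in range(iters):
        cand = w + rng.normal(scale=sigma, size=w.shape)
        f = one_vs_rest_fitness(cand, X, y, cls)
        if f >= best:
            w, best = cand, f
    return w

def fit_ensemble(X, y):
    # Steps 1-2 combined: one specialised classifier per class.
    classes = np.unique(y)
    return classes, [evolve_weights(X, y, c) for c in classes]

def predict(classes, weights, X):
    # Step 3 (one plausible combiner): each specialist votes with
    # its raw score; the class whose specialist is most confident wins.
    scores = np.column_stack([score(w, X) for w in weights])
    return classes[np.argmax(scores, axis=1)]

# Toy two-class data: two well-separated Gaussian blobs plus a bias column.
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
X = np.hstack([X, np.ones((100, 1))])
y = np.array([0] * 50 + [1] * 50)

classes, weights = fit_ensemble(X, y)
acc = np.mean(predict(classes, weights, X) == y)
```

Because the data are linearly separable, the per-class specialists reach high one-vs-rest accuracy and the argmax combiner recovers the labels; in the paper's setting the combiner would also consult the confusion-matrix-based diversity monitored during training.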