In this paper, a new method for improving the performance of combinational classifier systems is proposed. The main idea behind this method is heuristic retraining of artificial neural networks (ANNs). In combinational classifier systems, the greater the diversity among the base classifiers' results, the better the final result obtained. The presented method for creating this diversity is called heuristic retraining. First, an MLP is trained as a base classifier. Then, guided by the errors of this base classifier, further MLPs are trained heuristically. Because the main concentration is on error-prone data, the different classifiers are trained with varying degrees of emphasis on those data. Finally, the outputs of these retrained MLPs are combined. Although the accuracies of these classifiers are similar, their different degrees of concentration on erroneous data make their outputs only weakly correlated. Experimental results show a valuable improvement on two standard datasets, Iris and Wine.
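The procedure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the oversampling factors, network sizes, and majority-vote combiner are assumptions chosen for the sketch, and scikit-learn's `MLPClassifier` stands in for the paper's MLPs.

```python
# Hedged sketch of heuristic retraining for an MLP ensemble.
# Assumptions (not specified in the abstract): emphasis on error-prone
# data is created by replicating misclassified samples with growing
# factors, and outputs are combined by majority vote.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1. Train the base MLP classifier.
base = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
base.fit(X_tr, y_tr)

# 2. Identify the error-prone training samples of the base classifier.
err = base.predict(X_tr) != y_tr

# 3. Heuristically retrain further MLPs, each concentrating more
#    strongly on the erroneous data (replication factors are assumed).
ensemble = [base]
for k, factor in enumerate((2, 4, 8), start=1):
    X_aug = np.vstack([X_tr] + [X_tr[err]] * factor)
    y_aug = np.concatenate([y_tr] + [y_tr[err]] * factor)
    clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=k)
    clf.fit(X_aug, y_aug)
    ensemble.append(clf)

# 4. Combine the outputs of all retrained MLPs by majority vote.
votes = np.stack([clf.predict(X_te) for clf in ensemble])
combined = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```

Because each retrained network sees the misclassified samples with a different weight, the members reach similar accuracy yet disagree on different inputs, which is exactly the diversity the combination step exploits.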