One crucial issue in the design of combinational classifier systems is maintaining diversity among the classifiers' results so that the ensemble reaches an appropriate final decision: the more diverse the classifiers' results, the better the combined result tends to be. In this paper, a new approach for generating diversity during the creation of an ensemble, together with a new combining classifier system, is proposed. The main idea of this novel system is the heuristic retraining of some base classifiers. First, a basic classifier is run; then, with regard to the drawbacks of this classifier, other base classifiers are retrained heuristically, each looking at the data from its own perspective. The retrained classifiers concentrate on leveraging the error-prone data, so they usually cast different votes on sample points that lie close to decision boundaries and are therefore likely to be misclassified. Like all ensemble learning approaches, the proposed ensemble meta-learner can be developed on top of any base classifier. The main contributions are to keep some advantages of the base classifiers while resolving some of their drawbacks, and consequently to enhance classification performance. This study investigates how focusing on a few crucial data points can reinforce the performance of any base classifier. The paper also proves that increasing the number of copies of all "difficult" data points, as the boosting method does, does not always produce a better training set. Experiments show significant improvements in the accuracy of the consensus classification, and the proposed algorithm outperforms some of the best methods in the literature. Finally, based on the experimental results, the authors claim that, under the right conditions, both forcing crucial data points into the training set and eliminating them from it can lead to more accurate results.
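The abstract's core idea can be illustrated with a minimal sketch. This is not the authors' exact algorithm (the paper's heuristics are not reproduced here); it only shows the general pattern the abstract describes, under assumed details: train a first base classifier, identify the "difficult" points it misclassifies, retrain further ensemble members on training sets that emphasize those points, and combine the members by majority vote. The toy nearest-centroid learner, the duplication heuristic, and the `emphasis` parameter are all illustrative assumptions.

```python
import numpy as np

class CentroidClassifier:
    """Toy base learner (nearest class centroid), a stand-in for any base classifier."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # squared distance from each sample to each class centroid
        d = ((X[:, None, :] - self.centroids_[None, :, :]) ** 2).sum(axis=2)
        return self.classes_[np.argmin(d, axis=1)]

def heuristic_retrain_ensemble(X, y, n_members=3, emphasis=2):
    """Sketch of heuristic retraining (details assumed): each new member is
    trained on a set that duplicates the previous member's misclassified,
    error-prone points, so members disagree mainly near the boundaries."""
    members = [CentroidClassifier().fit(X, y)]
    for _ in range(n_members - 1):
        errors = members[-1].predict(X) != y
        # emphasize the difficult points by adding `emphasis` extra copies
        X_aug = np.vstack([X] + [X[errors]] * emphasis)
        y_aug = np.concatenate([y] + [y[errors]] * emphasis)
        members.append(CentroidClassifier().fit(X_aug, y_aug))
    return members

def majority_vote(members, X):
    """Combine member predictions by a per-sample majority vote."""
    votes = np.stack([m.predict(X) for m in members])
    return np.array([np.bincount(col).argmax() for col in votes.T])
```

Note that, consistent with the abstract's caveat about boosting, blindly duplicating every difficult point is not guaranteed to help; the paper's contribution lies in deciding heuristically when to force such points into the training set and when to remove them.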