Although many classifier ensemble methods exist, none applies weighting at the class level. Our proposed ensemble builds on Random Forest, which uses decision trees for problem solving. In this work, we propose a weighting-based classifier ensemble method that operates at the class level. Like Random Forest, the proposed method employs decision trees (and, alternatively, neural networks) as base classifiers; unlike Random Forest, it assigns each classifier a weight vector with one weight per class. The main assumption of the method is that the reliability of a classifier's predictions differs from class to class. To evaluate the proposed weighting scheme, ensembles of both decision tree and neural network classifiers were tested on a large dataset of handwritten Persian digits, where they improved on competing methods.
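The class-level weighting the abstract describes can be illustrated with a small sketch. This is not the paper's implementation; it is a minimal NumPy illustration under the assumption that each classifier outputs class probabilities and carries one weight per class (e.g., its per-class accuracy on a validation set), and that the ensemble sums the class-weighted probabilities before taking the argmax. The function name `class_weighted_vote` is a hypothetical name chosen here:

```python
import numpy as np

def class_weighted_vote(probas, class_weights):
    """Combine classifier outputs with per-class weights.

    probas        : array (n_classifiers, n_samples, n_classes)
                    probability outputs of each base classifier.
    class_weights : array (n_classifiers, n_classes)
                    one weight per classifier per class, reflecting how
                    reliable that classifier is on that class (assumption:
                    e.g., per-class validation accuracy).
    Returns predicted class index per sample.
    """
    # Scale each classifier's probability for class c by its weight for class c,
    # then sum the weighted scores across classifiers and pick the best class.
    weighted = probas * class_weights[:, None, :]
    return weighted.sum(axis=0).argmax(axis=1)
```

For example, a classifier that is trustworthy on class 1 but not class 0 can outvote an opposing classifier on class 1 even when the raw probabilities are symmetric, which a single global weight per classifier cannot express.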