In classifier combination, the relative values of the a posteriori probabilities assigned to different hypotheses matter more than the accuracy of their estimates. Because of this, the independence requirement in naive Bayesian fusion should be examined from the point of view of combined accuracy. This study investigates whether there exists a set of dependent classifiers that provides better combined accuracy than independent classifiers when naive Bayesian fusion is used. For this purpose, the case of two classes and three classifiers is considered first, where the pattern classes are not equally probable. Owing to the increased complexity of the formulations, equal a priori probabilities are assumed in the general case of N classes and K classifiers. The analysis shows that combining dependent classifiers using naive Bayesian fusion may provide considerably better combined accuracies than combining independent classifiers.
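As a minimal sketch of the fusion rule discussed above: under the independence assumption, the fused a posteriori probability of each class is proportional to the product of the posteriors reported by the individual classifiers (with equal a priori probabilities, the prior factor is constant and folds into the normalization). The function name and inputs below are illustrative, not taken from the paper.

```python
def naive_bayes_fuse(posteriors):
    """Fuse K posterior vectors (each a list over the N classes) into one
    normalized posterior vector, assuming classifier independence and
    equal a priori class probabilities."""
    n_classes = len(posteriors[0])
    fused = [1.0] * n_classes
    # Multiply the per-classifier posteriors class by class.
    for p in posteriors:
        for j in range(n_classes):
            fused[j] *= p[j]
    # Normalize so the fused values sum to one.
    total = sum(fused)
    return [f / total for f in fused]

# Three classifiers, two classes: only the relative ordering of the fused
# posteriors decides the label, not the accuracy of each estimate.
fused = naive_bayes_fuse([[0.6, 0.4], [0.7, 0.3], [0.4, 0.6]])
print(fused)
```

Note that the third classifier disagrees with the first two, yet the fused vector still favors the first class, since the product rule weighs the confident agreement of the majority.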