A 'No Panacea Theorem' for Multiple Classifier Combination
Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), Volume 2
We introduce the 'No Panacea Theorem' (NPT) for multiple classifier combination, previously proved only for the case of two classifiers and two classes. In this paper, we extend the NPT to the case of multiple classifiers and multiple classes. We prove that if the combination function is continuous and diverse, there exists a situation in which the combination algorithm performs very poorly. The proof constructs 'pathological' probability density functions with high density concentrated in particular regions, such that the combination function misclassifies. Thus, there is no optimal combination algorithm that is suitable in all situations. The theorem shows that the probability density functions (pdfs) play a central role in the performance of combination algorithms, so studying the pdfs is the first step toward finding a good combination algorithm. Although devised for classifier combination, the NPT is also relevant to all supervised classification problems.
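The construction in the proof can be illustrated numerically. The sketch below is not the paper's construction, but a minimal hypothetical example in the same spirit: for a fixed continuous combiner (here, score averaging over two classifiers and two classes), we pick class-conditional score densities concentrated so that the averaged score ranks the classes the wrong way, even though one classifier alone separates them perfectly. All distributions and thresholds are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical "pathological" class-conditional score densities.
# Each classifier emits a support score in [0, 1] for class 1.
# Class 1: classifier 1 scores near 0.9, classifier 2 near 0.0 -> mean ~0.45
s1_c1 = rng.normal(0.90, 0.01, n)
s2_c1 = rng.normal(0.00, 0.01, n)
# Class 0: classifier 1 scores near 0.4, classifier 2 near 0.7 -> mean ~0.55
s1_c0 = rng.normal(0.40, 0.01, n)
s2_c0 = rng.normal(0.70, 0.01, n)

def accuracy_single(s_c1, s_c0, thresh=0.5):
    # Decide class 1 whenever the single classifier's score exceeds thresh.
    return 0.5 * ((s_c1 > thresh).mean() + (s_c0 <= thresh).mean())

def accuracy_average(thresh=0.5):
    # Continuous averaging combiner applied to both classifiers' scores.
    avg_c1 = (s1_c1 + s2_c1) / 2
    avg_c0 = (s1_c0 + s2_c0) / 2
    return 0.5 * ((avg_c1 > thresh).mean() + (avg_c0 <= thresh).mean())

print(f"classifier 1 alone: {accuracy_single(s1_c1, s1_c0):.3f}")
print(f"averaging combiner: {accuracy_average():.3f}")
```

Under these densities, classifier 1 alone is near-perfect while the averaging combiner is almost always wrong, mirroring the theorem's claim that for any fixed continuous combination function some distribution makes it fail badly.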