A useful notion of weak dependence between many classifiers constructed from the same training data is introduced. It is shown that if this weak dependence is low and the expected margins are large, then decision rules based on linear combinations of these classifiers can achieve error rates that decrease exponentially fast. Empirical results with randomized trees, and with trees constructed via boosting and bagging, show that weak dependence is present in these types of trees. Furthermore, the results suggest a trade-off between weak dependence and expected margins: to compensate for small expected margins, the classifiers entering the linear combination should exhibit low mutual dependence.
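The two quantities the abstract relates can be estimated directly from an ensemble. Below is a minimal illustrative sketch, not the paper's own procedure: it assumes scikit-learn, a synthetic binary classification dataset, and bagged decision trees, and it measures the expected voting margin together with a crude dependence proxy (average pairwise agreement between trees). All names and parameters here are hypothetical choices for the example.

```python
# Sketch: expected voting margin and a pairwise-agreement proxy for
# dependence among bagged trees (assumes scikit-learn; data is synthetic).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ens = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100,
                        random_state=0).fit(X_tr, y_tr)

# Per-tree predictions on the test set: shape (n_estimators, n_test).
preds = np.array([t.predict(X_te) for t in ens.estimators_])

# Voting margin at each test point (binary case): fraction of trees
# voting for the true class minus the fraction voting against it.
votes_true = (preds == y_te).mean(axis=0)
margins = 2.0 * votes_true - 1.0
print("expected margin:", margins.mean())

# Crude dependence proxy: mean pairwise agreement between the trees'
# predictions (higher agreement suggests stronger mutual dependence).
n = len(preds)
agree = np.mean([(preds[i] == preds[j]).mean()
                 for i in range(n) for j in range(i + 1, n)])
print("mean pairwise agreement:", agree)
```

Under the trade-off described above, an ensemble whose expected margin comes out small would need a correspondingly low agreement score for the exponential error-rate bound to remain informative.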