The problem of improving the accuracy of a hypothesis output by a learning algorithm in the distribution-free learning model is considered. A concept class is learnable (or strongly learnable) if, given access to a source of examples from the unknown concept, the learner, with high probability, is able to output a hypothesis that is correct on all but an arbitrarily small fraction of the instances. The concept class is weakly learnable if the learner can produce a hypothesis that performs only slightly better than random guessing. It is shown that these two notions of learnability are equivalent. An explicit method is described for directly converting a weak learning algorithm into one that achieves arbitrarily high accuracy. This construction may have practical applications as a tool for efficiently converting a mediocre learning algorithm into one that performs extremely well. In addition, the construction has some interesting theoretical consequences.
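To make the weak-to-strong conversion concrete, here is a minimal sketch of boosting by example reweighting, in the style of the later AdaBoost algorithm rather than the recursive majority-vote construction the paper itself describes. The decision-stump weak learner, the toy dataset, and all function names below are illustrative assumptions; the only requirement the sketch relies on is the abstract's premise that the weak learner beats random guessing on any weighting of the examples.

```python
import numpy as np

def weak_learner(X, y, w):
    """Illustrative weak learner: the threshold stump with the lowest
    weighted error. Any hypothesis slightly better than random would do."""
    best, best_err = (0, 0.0, 1), np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - t) > 0, 1, -1)
                err = np.sum(w[pred != y])
                if err < best_err:
                    best_err, best = err, (j, t, pol)
    return best, best_err

def stump_predict(stump, X):
    j, t, pol = stump
    return np.where(pol * (X[:, j] - t) > 0, 1, -1)

def boost(X, y, rounds=30):
    """Reweighting loop: concentrate weight on examples the current
    hypotheses misclassify, then combine by weighted majority vote."""
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(rounds):
        stump, err = weak_learner(X, y, w)
        if err >= 0.5:          # no longer better than random: stop
            break
        err = max(err, 1e-10)   # guard against division by zero
        alpha = 0.5 * np.log((1 - err) / err)
        pred = stump_predict(stump, X)
        w *= np.exp(-alpha * y * pred)  # upweight mistakes
        w /= w.sum()
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, X):
    return np.sign(sum(a * stump_predict(s, X) for a, s in ensemble))

# Toy demo (hypothetical data): labels given by the sign of x1 + x2.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.where(X.sum(axis=1) > 0, 1, -1)
ens = boost(X, y)
print("training accuracy:", np.mean(predict(ens, X) == y))
```

Each round the weighted error of the weak hypothesis stays bounded below 1/2 by assumption, so the combined vote's training error can be driven arbitrarily low by adding rounds, which is the equivalence of weak and strong learnability that the abstract asserts.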