We describe a new Boosting algorithm that combines the base hypotheses with symmetric functions. Among its properties of practical relevance, the algorithm is significantly resistant to noise and remains efficient even in an agnostic learning setting; this last property is ruled out for voting-based Boosting algorithms such as ADABOOST. Experiments carried out on thirty domains, most of them readily available, support the reliability of the classifiers built.
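To make the distinction concrete, the following minimal sketch (not the paper's algorithm; all names and the toy base hypotheses are hypothetical) shows what it means to combine base hypotheses with a symmetric function: the combined prediction depends only on the number of base hypotheses voting +1, not on which ones, whereas a weighted vote as in ADABOOST is generally not symmetric in the base outputs.

```python
# Illustrative sketch only: combining base hypotheses h_1..h_T with a
# symmetric function. A symmetric combiner depends solely on the count
# k of base hypotheses outputting +1, i.e. it is invariant under any
# permutation of the base hypotheses.

def symmetric_combine(base_hypotheses, g):
    """Return the classifier x -> g(k), where k = #{t : h_t(x) == +1}."""
    def classifier(x):
        k = sum(1 for h in base_hypotheses if h(x) == +1)
        return g(k)
    return classifier

# Toy base hypotheses on integers (hypothetical, for illustration only).
hs = [lambda x: +1 if x > 0 else -1,
      lambda x: +1 if x > 5 else -1,
      lambda x: +1 if x % 2 == 0 else -1]

# Unweighted majority vote is one particular symmetric function of the
# vote count; any map from {0, ..., T} to {-1, +1} is equally admissible.
majority = symmetric_combine(hs, lambda k: +1 if 2 * k > len(hs) else -1)
```

For example, `majority(6)` returns +1 since all three toy hypotheses vote +1, while `majority(-3)` returns -1 since none do.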