AdaBoost is a popular and effective leveraging procedure for improving the hypotheses generated by weak learning algorithms. AdaBoost and many other leveraging algorithms can be viewed as performing a constrained gradient descent over a potential function. At each iteration the distribution over the sample given to the weak learner is the direction of steepest descent. We introduce a new leveraging algorithm based on a natural potential function. For this potential function, the direction of steepest descent can have negative components. Therefore we provide two transformations for obtaining suitable distributions from these directions of steepest descent. The resulting algorithms have bounds that are incomparable to AdaBoost's, and their empirical performance is similar to AdaBoost's.
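The gradient-descent view described in the abstract can be made concrete with a short sketch. The Python below is illustrative only, under stated assumptions: it uses decision stumps as the weak learner, the exponential potential associated with AdaBoost, and a simple clip-and-renormalize transformation (named to_distribution here, a hypothetical helper) for directions of steepest descent that contain negative components. It does not reproduce the paper's actual potential function or its two transformations.

```python
import numpy as np

def stump_learner(X, y, D):
    """Hypothetical weak learner: the threshold stump (feature, threshold,
    sign) with the smallest weighted error under the distribution D."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] <= thr, 1, -1)
                err = D[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    err, j, thr, sign = best
    return (lambda Z: sign * np.where(Z[:, j] <= thr, 1, -1)), err

def to_distribution(direction):
    # Illustrative transformation (an assumption, not the paper's):
    # clip negative components of the steepest-descent direction to zero,
    # then renormalize so the weights form a distribution.
    d = np.maximum(direction, 0.0)
    s = d.sum()
    return d / s if s > 0 else np.full(len(direction), 1.0 / len(direction))

def leverage(X, y, rounds=20, grad=lambda m: np.exp(-m)):
    """Leveraging as constrained gradient descent on a potential of the
    margins m_i = y_i * F(x_i).  With grad(m) = exp(-m) this reduces to
    AdaBoost's reweighting; other potentials may yield directions with
    negative components, which to_distribution maps back to a distribution."""
    F = np.zeros(len(y))                      # combined (unthresholded) prediction
    ensemble = []
    for _ in range(rounds):
        D = to_distribution(grad(y * F))      # direction of steepest descent
        h, err = stump_learner(X, y, D)
        err = min(max(err, 1e-10), 1 - 1e-10) # guard against degenerate errors
        alpha = 0.5 * np.log((1 - err) / err) # AdaBoost-style step size
        F += alpha * h(X)
        ensemble.append((alpha, h))
    return lambda Z: np.sign(sum(a * h(Z) for a, h in ensemble))
```

With the exponential gradient the loop behaves like standard AdaBoost; the point of the sketch is that only the choice of potential and the transformation from steepest-descent direction to distribution change when a different potential is plugged in.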