Boosting a weak learning algorithm by majority
Information and Computation
A Comparison of New and Old Algorithms for a Mixture Estimation Problem
Machine Learning - Special issue on the eighth annual conference on computational learning theory (COLT '95)
A decision-theoretic generalization of on-line learning and an application to boosting
Journal of Computer and System Sciences - Special issue: 26th annual ACM symposium on the theory of computing, STOC '94, May 23–25, 1994, and second annual European conference on computational learning theory (EuroCOLT '95), March 13–15, 1995
Boosting in the limit: maximizing the margin of learned ensembles
AAAI '98/IAAI '98 Proceedings of the fifteenth national/tenth conference on Artificial intelligence/Innovative applications of artificial intelligence
An adaptive version of the boost by majority algorithm
COLT '99 Proceedings of the twelfth annual conference on Computational learning theory
Additive models, boosting, and inference for generalized divergences
COLT '99 Proceedings of the twelfth annual conference on Computational learning theory
Boosting as entropy projection
COLT '99 Proceedings of the twelfth annual conference on Computational learning theory
Machine Learning
Linear Programming Boosting via Column Generation
Machine Learning
Convex Optimization
Totally corrective boosting algorithms that maximize the margin
ICML '06 Proceedings of the 23rd international conference on Machine learning
Efficient Margin Maximizing with Boosting
The Journal of Machine Learning Research
Linear Programming Boosting by Column and Row Generation
DS '09 Proceedings of the 12th International Conference on Discovery Science
Sparse substring pattern set discovery using linear programming boosting
DS'10 Proceedings of the 13th international conference on Discovery science
PCA enhanced training data for AdaBoost
CAIP'11 Proceedings of the 14th international conference on Computer analysis of images and patterns - Volume Part I
Cloosting: clustering data with boosting
MCS'11 Proceedings of the 10th international conference on Multiple classifier systems
Trading Accuracy for Sparsity in Optimization Problems with Sparsity Constraints
SIAM Journal on Optimization
A robust and efficient doubly regularized metric learning approach
ECCV'12 Proceedings of the 12th European conference on Computer Vision - Volume Part IV
Fully corrective boosting with arbitrary loss and regularization
Neural Networks
In this paper we discuss boosting algorithms that maximize the soft margin of the produced linear combination of base hypotheses. LPBoost is the most straightforward boosting algorithm for this purpose: it maximizes the soft margin by solving a linear programming problem. While it performs well on natural data, there are cases where its number of iterations grows linearly in the number of examples rather than logarithmically. By simply adding a relative entropy regularizer to the linear objective of LPBoost, we arrive at the Entropy Regularized LPBoost algorithm, for which we prove a logarithmic iteration bound. A previous algorithm, SoftBoost, has the same iteration bound, but its generalization error often decreases slowly in early iterations. Entropy Regularized LPBoost does not suffer from this problem and has a simpler, more natural motivation.
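The soft-margin linear program underlying LPBoost can be sketched as follows for a fixed pool of base hypotheses (the full algorithm generates hypothesis columns one at a time via column generation). This is a minimal illustration using standard notation from the boosting literature rather than anything stated in the abstract: `U[i, t] = y_i * h_t(x_i)` is the margin of base hypothesis `t` on example `i`, `nu` controls the slack penalty, and the helper name `soft_margin_lp` is hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def soft_margin_lp(U, nu):
    """Solve the soft-margin LP over a fixed pool of base hypotheses.

    U[i, t] = y_i * h_t(x_i): margin of base hypothesis t on example i.
    Maximizes  rho - (1/nu) * sum(xi)  subject to
        U @ w >= rho - xi,   xi >= 0,   w >= 0,   sum(w) = 1.
    Returns the hypothesis weights w and the achieved soft margin rho.
    """
    n, T = U.shape
    # Stack the variables as z = [w (T entries), rho (1 entry), xi (n entries)].
    c = np.zeros(T + 1 + n)
    c[T] = -1.0            # maximize rho  ->  minimize -rho
    c[T + 1:] = 1.0 / nu   # linear penalty on the slack variables
    # Margin constraints rewritten as:  -U w + rho - xi <= 0
    A_ub = np.hstack([-U, np.ones((n, 1)), -np.eye(n)])
    b_ub = np.zeros(n)
    # Simplex constraint on the hypothesis weights: sum(w) = 1.
    A_eq = np.zeros((1, T + 1 + n))
    A_eq[0, :T] = 1.0
    b_eq = np.array([1.0])
    bounds = [(0, None)] * T + [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:T], res.x[T]
```

Entropy Regularized LPBoost keeps these constraints but, in the dual over example distributions, adds a relative entropy term to this purely linear objective, making it strongly convex; that regularization is what yields the logarithmic iteration bound the abstract describes.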