Boosting through optimization of margin distributions
IEEE Transactions on Neural Networks
We study boosting algorithms from a new perspective. We show that the Lagrange dual problems of ℓ1-norm-regularized AdaBoost, LogitBoost, and soft-margin LPBoost with generalized hinge loss are all entropy maximization problems. Examining these dual problems shows that the success of boosting can be understood in terms of maintaining a better margin distribution: maximizing margins while at the same time controlling the margin variance. We also prove theoretically that, approximately, ℓ1-norm-regularized AdaBoost maximizes the average margin rather than the minimum margin. The duality formulation further enables us to develop column-generation-based optimization algorithms, which are totally corrective. These algorithms achieve classification results almost identical to those of standard stagewise additive boosting algorithms, but converge much faster, so fewer weak classifiers are needed to build the ensemble.
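The column-generation view translates directly into a training loop. Below is a minimal sketch of a totally-corrective, LPBoost-style trainer with decision stumps as weak learners: the restricted master problem is the soft-margin dual LP (minimize r subject to Σ_i u_i y_i h_j(x_i) ≤ r for every selected column j, Σ_i u_i = 1, 0 ≤ u_i ≤ 1/(νm)). This is an illustrative sketch under stated assumptions, not the paper's implementation, which works with the entropy-regularized duals of AdaBoost/LogitBoost rather than a plain LP; the function name, the choice of scipy's HiGHS solver, and the assumption of labels in {-1, +1} are all mine.

```python
import numpy as np
from scipy.optimize import linprog
from sklearn.tree import DecisionTreeClassifier

def lpboost_column_generation(X, y, nu=0.1, max_iters=50, tol=1e-4):
    """Totally-corrective boosting via column generation.

    A soft-margin LPBoost-style sketch (not the paper's exact
    entropy-regularized variant). Assumes y takes values in {-1, +1}.
    """
    m = X.shape[0]
    u = np.full(m, 1.0 / m)      # dual variables = sample weights
    cols, learners = [], []      # columns are margin vectors y_i * h(x_i)
    D = 1.0 / (nu * m)           # box constraint on dual variables

    for _ in range(max_iters):
        # 1. Base learning: fit a weak learner on the current dual weights.
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=u)
        col = y * h.predict(X)   # margins y_i h(x_i), entries in {-1, +1}
        edge = u @ col

        # 2. Stop when no weak learner violates the dual constraint
        #    u' col <= r (i.e., the new column prices out).
        if cols and edge <= r + tol:
            break
        learners.append(h)
        cols.append(col)

        # 3. Totally-corrective step: re-solve the restricted dual LP
        #    min_{u,r} r  s.t.  u' H_j <= r (all j), sum(u) = 1, 0 <= u <= D.
        A = np.column_stack(cols)                    # shape (m, n_cols)
        n = A.shape[1]
        c = np.zeros(m + 1); c[-1] = 1.0             # variables [u, r]
        A_ub = np.hstack([A.T, -np.ones((n, 1))])    # u' H_j - r <= 0
        b_ub = np.zeros(n)
        A_eq = np.zeros((1, m + 1)); A_eq[0, :m] = 1.0
        b_eq = np.array([1.0])
        bounds = [(0.0, D)] * m + [(None, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds, method="highs")
        u, r = res.x[:m], res.x[-1]
        # Primal weights w_j are the duals of the u' H_j <= r constraints;
        # the marginals' sign convention depends on the solver, so normalize.
        w = np.abs(res.ineqlin.marginals)
        w /= max(w.sum(), 1e-12)

    return learners, w
```

Two points worth noting about this loop. Each round calls the base learner with the current dual variables as sample weights, exactly as in stagewise boosting; but step 3 then re-optimizes the weights of all previously selected weak learners at once, which is what "totally corrective" means and why far fewer rounds suffice. The ensemble predicts with sign(Σ_j w_j h_j(x)), and training terminates once no weak learner attains an edge above the dual optimum r, the standard column-generation stopping rule.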