We propose a boosting algorithm that minimizes the AdaBoost exponential loss of a composite classifier while using only a sparse set of base classifiers. The proposed algorithm is computationally efficient and, in test examples, produces composite classifiers that are sparser than, and generalize as well as, those produced by AdaBoost. The algorithm can be viewed as a coordinate descent method for the l1-regularized AdaBoost exponential loss function.
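The coordinate-descent view can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the choice of decision stumps as base classifiers, the greedy largest-edge selection rule, and all function names are our assumptions. The one-dimensional coordinate update solves the l1-penalized scalar subproblem in closed form.

```python
import numpy as np

def l1_exp_step(wp, wm, lam):
    """Exact minimizer over t of  wp*exp(-t) + wm*exp(t) + lam*|t|  (wp, wm > 0).

    Setting the derivative to zero on each sign branch gives a quadratic in
    u = exp(t); if neither branch is sign-consistent, the minimizer is t = 0.
    """
    u = (-lam + np.sqrt(lam**2 + 4.0 * wp * wm)) / (2.0 * wm)
    if u > 1.0:                                    # consistent with t > 0
        return np.log(u)
    u = (lam + np.sqrt(lam**2 + 4.0 * wp * wm)) / (2.0 * wm)
    if u < 1.0:                                    # consistent with t < 0
        return np.log(u)
    return 0.0                                     # soft-thresholded to zero

def sparse_boost(X, y, n_rounds=50, lam=0.5):
    """Greedy coordinate descent on  sum_i exp(-y_i F(x_i)) + lam * ||alpha||_1,
    where F = sum_j alpha_j h_j and the h_j are axis-aligned decision stumps."""
    n, d = X.shape
    cols = []
    for f in range(d):                             # stump pool: h(x) = sign(x[f] - t)
        for t in np.unique(X[:, f])[:-1]:
            cols.append(np.where(X[:, f] > t, 1.0, -1.0))
    H = np.column_stack(cols)                      # (n, m) base-classifier outputs
    alpha = np.zeros(H.shape[1])
    margins = np.zeros(n)                          # margins_i = y_i * F(x_i)
    for _ in range(n_rounds):
        w = np.exp(-margins)
        j = int(np.argmax(np.abs((w * y) @ H)))    # stump with largest weighted edge
        m_rest = margins - alpha[j] * y * H[:, j]  # remove stump j's contribution
        w_rest = np.exp(-m_rest)
        agree = y * H[:, j] > 0
        wp = w_rest[agree].sum() + 1e-12           # weight of agreements
        wm = w_rest[~agree].sum() + 1e-12          # weight of disagreements
        alpha[j] = l1_exp_step(wp, wm, lam)        # exact 1-D penalized update
        margins = m_rest + alpha[j] * y * H[:, j]
    return alpha, H

# Toy demo on a linearly separable labeling.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
alpha, H = sparse_boost(X, y)
acc = float((np.sign(H @ alpha) == y).mean())
```

The l1 penalty drives many coordinates of `alpha` to exactly zero, which is where the sparsity of the composite classifier comes from; with `lam = 0` the update reduces to the familiar AdaBoost step `0.5 * log(wp / wm)`.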