Greedy algorithms for classification—consistency, convergence rates, and adaptivity. The Journal of Machine Learning Research.
Boosting as a Regularized Path to a Maximum Margin Classifier. The Journal of Machine Learning Research.
Density estimation with stagewise optimization of the empirical risk. Machine Learning.
Coresets, sparse greedy approximation, and the Frank-Wolfe algorithm. Proceedings of the Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms.
Multi-class cost-sensitive boosting with p-norm loss functions. Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Blind Recovery Conditions for Images with Side Information. IEICE Transactions on Information and Systems.
Stochastic methods for l1-regularized loss minimization. ICML '09: Proceedings of the 26th Annual International Conference on Machine Learning.
Weight-decay regularization in reproducing kernel Hilbert spaces by variable-basis schemes. WSEAS Transactions on Mathematics.
Bundle Methods for Regularized Risk Minimization. The Journal of Machine Learning Research.
Coresets, sparse greedy approximation, and the Frank-Wolfe algorithm. ACM Transactions on Algorithms (TALG).
Large-margin classification in infinite neural networks. Neural Computation.
Estimates on weight-decay regularization by variable-basis schemes. ACS '09: Proceedings of the 9th WSEAS International Conference on Applied Computer Science.
A two-view learning approach for image tag ranking. Proceedings of the Fourth ACM International Conference on Web Search and Data Mining.
Stochastic Methods for l1-regularized Loss Minimization. The Journal of Machine Learning Research.
Trading Accuracy for Sparsity in Optimization Problems with Sparsity Constraints. SIAM Journal on Optimization.
Unifying divergence minimization and statistical inference via convex duality. COLT '06: Proceedings of the 19th Annual Conference on Learning Theory.
Suboptimal Solutions to Team Optimization Problems with Stochastic Information Structure. SIAM Journal on Optimization.
Positive semidefinite metric learning using boosting-like algorithms. The Journal of Machine Learning Research.
A greedy algorithm for a class of convex optimization problems is presented. The algorithm is motivated by function approximation using a sparse combination of basis functions, as well as by several of its variants. We derive a bound on the rate of approximate minimization for this algorithm and present examples of its application. Our analysis generalizes a number of earlier studies.
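The abstract does not spell out the algorithm, but one well-known instance of greedy convex minimization over sparse combinations of basis elements is the Frank-Wolfe method cited above. The sketch below is an illustration of that general idea, not the paper's specific method: at each step it greedily selects the single simplex vertex (basis element) that most decreases the objective, so the iterate after t steps is a convex combination of at most t+1 vertices. The function names and the quadratic test objective are choices made for this example.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iters=200):
    """Greedy (Frank-Wolfe) minimization over the probability simplex.

    Each iteration picks the coordinate with the most negative gradient
    (the greedy vertex) and moves toward it with the standard step size
    2/(t+2), keeping the iterate a sparse convex combination of vertices.
    """
    x = x0.copy()
    for t in range(n_iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0      # greedy choice of basis element
        gamma = 2.0 / (t + 2.0)    # classical O(1/t) step-size schedule
        x = (1.0 - gamma) * x + gamma * s
    return x

# Example: minimize f(x) = ||A x - b||^2 over the simplex, where b is
# generated from a sparse point of the simplex so the minimum is near 0.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = A @ np.array([0.6, 0.4, 0.0, 0.0, 0.0])
grad = lambda x: 2.0 * A.T @ (A @ x - b)
x = frank_wolfe_simplex(grad, np.ones(5) / 5)
```

Because every update is a convex combination of the previous iterate and a vertex, the result stays in the simplex without any projection step, which is what makes this family of greedy methods attractive for sparsity-constrained problems.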