A Quasi-Newton Approach to Nonsmooth Convex Optimization Problems in Machine Learning
The Journal of Machine Learning Research
We extend the well-known BFGS quasi-Newton method and its limited-memory variant, LBFGS, to the optimization of nonsmooth convex objectives. This is done in a rigorous fashion by generalizing three components of BFGS to subdifferentials: the local quadratic model, the identification of a descent direction, and the Wolfe line search conditions. We apply the resulting subLBFGS algorithm to L2-regularized risk minimization with the binary hinge loss, and its direction-finding component to L1-regularized risk minimization with the logistic loss. In both settings our generic algorithms perform comparably to, or better than, their counterparts in specialized state-of-the-art solvers.
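For intuition, below is a minimal Python sketch of two of the ingredients the abstract mentions, in the L2-regularized hinge-loss setting: picking one element of the subdifferential, and computing an LBFGS-style quasi-Newton direction via the standard two-loop recursion. This is not the authors' subLBFGS implementation; the function names and the choice of subgradient at the hinge's kink are illustrative assumptions. subLBFGS proper searches the subdifferential for a provably descending direction and uses a subgradient-adapted Wolfe line search, both of which this sketch omits.

```python
import numpy as np

def hinge_subgradient(w, X, y, lam):
    """One subgradient of J(w) = lam/2 ||w||^2 + mean(max(0, 1 - y * Xw)).
    At the hinge's kink (margin exactly 1) we arbitrarily pick the zero
    element of the subdifferential; subLBFGS instead optimizes over the
    subdifferential to guarantee a descent direction."""
    margins = y * (X @ w)
    active = margins < 1.0                  # points with nonzero hinge loss
    return lam * w - (X[active].T @ y[active]) / len(y)

def lbfgs_direction(g, s_hist, y_hist):
    """Standard LBFGS two-loop recursion: returns -H g, where H is the
    quasi-Newton inverse-Hessian estimate built from the stored curvature
    pairs (s_k, y_k). With no history it falls back to steepest descent."""
    q, alphas = g.copy(), []
    for s, yv in zip(reversed(s_hist), reversed(y_hist)):  # newest first
        a = (s @ q) / (yv @ s)
        alphas.append(a)
        q -= a * yv
    if s_hist:                              # initial scaling H0 = gamma * I
        q *= (s_hist[-1] @ y_hist[-1]) / (y_hist[-1] @ y_hist[-1])
    for (s, yv), a in zip(zip(s_hist, y_hist), reversed(alphas)):  # oldest first
        b = (yv @ q) / (yv @ s)
        q += (a - b) * s
    return -q
```

A plain subgradient fed into this recursion is not guaranteed to yield a descent direction when the objective is nonsmooth at the current iterate; handling exactly that failure mode is the point of the paper's direction-finding component.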