A quasi-Newton approach to non-smooth convex optimization

  • Authors:
  • Jin Yu, S. V. N. Vishwanathan, Simon Günter, Nicol N. Schraudolph

  • Affiliations:
  • NICTA, Canberra, Australia and Australian National University, Canberra, Australia (all authors)

  • Venue:
  • Proceedings of the 25th international conference on Machine learning
  • Year:
  • 2008

Abstract

We extend the well-known BFGS quasi-Newton method and its limited-memory variant LBFGS to the optimization of non-smooth convex objectives. This is done in a rigorous fashion by generalizing three components of BFGS to subdifferentials: the local quadratic model, the identification of a descent direction, and the Wolfe line search conditions. We apply the resulting subLBFGS algorithm to L2-regularized risk minimization with binary hinge loss, and its direction-finding component to L1-regularized risk minimization with logistic loss. In both settings our generic algorithms perform comparably to or better than their counterparts in specialized state-of-the-art solvers.
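
For orientation, the sketch below writes out the L2-regularized binary hinge-loss objective the abstract refers to, together with one element of its subdifferential (the hinge loss is non-differentiable at margin 1). The optimization loop shown is plain subgradient descent, not the paper's subLBFGS update; all variable names, data, and step sizes are illustrative assumptions, not the authors' code.

```python
import numpy as np

def objective(w, X, y, lam):
    """J(w) = (lam/2)||w||^2 + (1/n) * sum_i max(0, 1 - y_i <w, x_i>)."""
    margins = y * (X @ w)
    return 0.5 * lam * np.dot(w, w) + np.mean(np.maximum(0.0, 1.0 - margins))

def subgradient(w, X, y, lam):
    """One element of the subdifferential of J at w.

    At points where some margin equals exactly 1, any convex combination
    of the one-sided derivatives is a valid subgradient; we pick one.
    """
    margins = y * (X @ w)
    active = margins < 1.0  # examples with positive hinge loss
    g = -(y[active][:, None] * X[active]).sum(axis=0) / len(y)
    return lam * w + g

# Toy usage on synthetic linearly separable data (hypothetical setup).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X @ rng.normal(size=5))
w = np.zeros(5)
for t in range(1, 501):
    w -= (1.0 / (0.1 * t)) * subgradient(w, X, y, lam=0.1)  # decaying steps
print(objective(w, X, y, lam=0.1))
```

The paper's contribution is precisely to replace this slow first-order loop with a BFGS-style quadratic model, descent-direction search, and subgradient Wolfe line search adapted to such non-smooth objectives.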