Smooth Optimization with Approximate Gradient

  • Authors:
  • Alexandre d'Aspremont

  • Affiliations:
  • aspremon@princeton.edu

  • Venue:
  • SIAM Journal on Optimization
  • Year:
  • 2008

Abstract

We show that the optimal complexity of Nesterov's smooth first-order optimization algorithm is preserved when the gradient is computed only up to a small, uniformly bounded error. In applications of this method to semidefinite programs, this means in some instances computing only a few leading eigenvalues of the current iterate instead of a full matrix exponential, which significantly reduces the method's computational cost. This also allows sparse problems to be solved efficiently using sparse maximum eigenvalue packages.
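The computational point of the abstract is that, in the semidefinite applications, the gradient of the smoothed objective f_mu(X) = mu * log tr exp(X/mu), which equals exp(X/mu) / tr exp(X/mu), can be approximated from only a few leading eigenpairs of the current iterate instead of a full matrix exponential. Below is a minimal sketch of that idea; the function name, the default values of mu and k, and the use of SciPy's sparse eigensolver are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def approx_smoothed_gradient(X, mu=1e-2, k=5):
    """Rank-k approximation of the gradient of f_mu(X) = mu*log tr exp(X/mu),
    i.e. of exp(X/mu) / tr exp(X/mu), using only k leading eigenpairs of X.
    Illustrative sketch; names and defaults are assumptions."""
    # Leading eigenpairs of the symmetric matrix X (sparse or dense).
    vals, vecs = eigsh(X, k=k, which="LA")
    # Shift by the largest eigenvalue for numerical stability of exp().
    w = np.exp((vals - vals.max()) / mu)
    w /= w.sum()
    # Sum_i w_i * v_i v_i^T approximates exp(X/mu) / tr exp(X/mu).
    return (vecs * w) @ vecs.T

# Example usage on a random sparse-like symmetric matrix.
if __name__ == "__main__":
    A = np.random.randn(200, 200)
    X = (A + A.T) / 2
    G = approx_smoothed_gradient(X, mu=1e-2, k=5)
    print(np.trace(G))  # close to 1, as for the exact smoothed gradient
```

When mu is small, the weights w concentrate on the top eigenvalues, so a few eigenpairs capture the gradient to within a small, uniformly bounded error, which is the regime the abstract refers to.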