Two “well-known” properties of subgradient optimization

  • Authors:
  • Kurt M. Anstreicher; Laurence A. Wolsey

  • Affiliations:
  • Department of Management Sciences, University of Iowa, Iowa City, IA 52242, USA; Center for Operations Research and Econometrics, Université Catholique de Louvain, 1348 Louvain-la-Neuve, Belgium

  • Venue:
  • Mathematical Programming, Series B (Special Issue: Nonsmooth Optimization and Applications)
  • Year:
  • 2009

Abstract

The subgradient method is both a heavily employed and widely studied algorithm for non-differentiable optimization. Nevertheless, there are some basic properties of subgradient optimization that, while “well known” to specialists, seem to be rather poorly known in the larger optimization community. This note concerns two such properties, both applicable to subgradient optimization using the divergent series steplength rule. The first involves convergence of the iterative process, and the second deals with the construction of primal estimates when subgradient optimization is applied to maximize the Lagrangian dual of a linear program. The two topics are related in that convergence of the iterates is required to prove correctness of the primal construction scheme.
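
To make the setting concrete, here is a minimal, hypothetical sketch (not taken from the paper): projected subgradient ascent on the Lagrangian dual of a small random LP, using the divergent series steplength rule α_k = 1/(k+1), combined with one common primal-recovery scheme that averages the inner Lagrangian solutions with weights proportional to the steplengths. The problem data (A, b, c), the box constraints, and the specific weighting are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# LP:  min c'x  s.t.  A x >= b,  0 <= x <= ub.
# Dual function:  L(u) = u'b + min_{0<=x<=ub} (c - A'u)'x,  maximized over u >= 0.
rng = np.random.default_rng(0)
m, n = 3, 5
A = rng.uniform(0.1, 1.0, size=(m, n))
ub = np.ones(n)
b = A @ (0.5 * ub)               # x = 0.5*ub is feasible, so the LP is feasible
c = rng.uniform(0.1, 1.0, size=n)

u = np.zeros(m)                  # dual multipliers, u >= 0
x_bar = np.zeros(n)              # primal estimate: convex combination of inner solutions
weight = 0.0

for k in range(50000):
    # Inner Lagrangian problem is separable: minimize (c - A'u)'x over the box [0, ub].
    reduced = c - A.T @ u
    x = np.where(reduced < 0.0, ub, 0.0)

    # g = b - A x is a (super)gradient of the concave dual function L at u.
    g = b - A @ x

    # Divergent series steplengths: alpha_k -> 0 and sum of alpha_k = infinity.
    alpha = 1.0 / (k + 1.0)

    # Projected subgradient ascent step on the dual.
    u = np.maximum(0.0, u + alpha * g)

    # Steplength-weighted running average of the inner solutions (primal estimate).
    x_bar = (weight * x_bar + alpha * x) / (weight + alpha)
    weight += alpha

L = u @ b + np.minimum(c - A.T @ u, 0.0) @ ub   # dual value at the final u
print("dual value L(u):        ", L)
print("primal value c'x_bar:   ", c @ x_bar)
print("violation of x_bar:     ", np.maximum(b - A @ x_bar, 0.0).max())
```

With these steplengths progress is slow, but as the iteration count grows the dual value L(u) and the primal value c'x̄ should approach each other while the constraint violation of x̄ tends to zero, illustrating in miniature the two properties the paper treats: convergence of the dual iterates under the divergent series rule, and the correctness of primal constructions built from convex combinations of the inner solutions.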