Convergence of Successive Approximation Methods with Parameter Target Sets

  • Authors:
  • Adam B. Levy

  • Affiliations:
  • Department of Mathematics, Bowdoin College, Brunswick, Maine 04011

  • Venue:
  • Mathematics of Operations Research
  • Year:
  • 2005

Abstract

Successive approximation methods appear throughout numerical optimization, where a solution to an optimization problem is sought as the limit of solutions to a succession of simpler approximation problems. Such methods include essentially any standard penalty method, barrier method, trust-region method, augmented Lagrangian method, or sequential quadratic programming (SQP) method, as well as many other methods. The approximation problems on which a successive approximation method is based typically depend on parameters, in which case the performance of the method is related to the corresponding sequence of parameters. For many successive approximation methods, the parameter sequence need only approach some parameter target set for the method to have good convergence properties. Successive approximation methods could be analyzed as examples of a generic inclusion-solving method from Levy (2004) because the solutions to the approximation problems satisfy necessary optimality inclusions. However, the inclusion-solving method from Levy (2004) was developed for single parameter target points rather than target sets. In this paper, we extend the results from Levy (2004) to allow parameter target sets and apply these results to the convergence analysis of successive approximation methods. We focus on two important convergence issues: (1) the rate of convergence of the iterates generated by a successive approximation method and (2) the validity of the limit as a solution to the original problem. An augmented Lagrangian method allowing quite general parameter updating is explored in detail to illustrate how the framework presented here can expose interesting new alternatives for numerical optimization.
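
Illustrative sketch

To make the idea of a parameterized successive approximation method concrete, the sketch below shows a minimal augmented Lagrangian loop in Python. It is an assumed toy example, not the method or parameter-updating rule analyzed in the paper: the objective f, constraint c, and the simple multiplier/penalty updates are hypothetical choices; in the paper's terms, the parameter sequence (lam_k, mu_k) only has to approach a suitable parameter target set.

```python
# Toy augmented Lagrangian iteration (illustrative only).
# Hypothetical problem: minimize f(x) = x1^2 + x2^2 subject to c(x) = x1 + x2 - 1 = 0.
import numpy as np
from scipy.optimize import minimize

def f(x):
    return x[0] ** 2 + x[1] ** 2

def c(x):
    return x[0] + x[1] - 1.0

def augmented_lagrangian(x, lam, mu):
    # L_A(x; lam, mu) = f(x) + lam * c(x) + (mu / 2) * c(x)^2
    return f(x) + lam * c(x) + 0.5 * mu * c(x) ** 2

x = np.zeros(2)        # current iterate
lam, mu = 0.0, 10.0    # multiplier estimate and penalty parameter
for k in range(20):
    # Solve the k-th approximation problem: an unconstrained subproblem
    # whose solution satisfies a necessary optimality inclusion.
    res = minimize(augmented_lagrangian, x, args=(lam, mu))
    x = res.x
    # Simple (assumed) parameter updates driving (lam, mu) toward a target set.
    lam += mu * c(x)   # first-order multiplier update
    mu *= 2.0          # increase the penalty parameter
    if abs(c(x)) < 1e-8:
        break

print(x)               # iterates approach the solution (0.5, 0.5)
```

Running the sketch, the subproblem solutions converge to the constrained minimizer (0.5, 0.5) while lam approaches the optimal multiplier, illustrating how the iterates of a successive approximation method track a sequence of simpler parameterized problems.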