Practical Methods of Optimization (2nd ed.)
Journal of Optimization Theory and Applications
Smooth transformation of the generalized minimax problem
Journal of Optimization Theory and Applications
Convergence to Second Order Stationary Points in Inequality Constrained Optimization
Mathematics of Operations Research
Advances in trust region algorithms for constrained optimization
HPOPT '96 Proceedings of the Stieltjes workshop on High performance optimization techniques
A Shifted-Barrier Primal-Dual Algorithm Model for Linearly Constrained Optimization Problems
Computational Optimization and Applications, special issue on computational optimization: a tribute to Olvi Mangasarian, part I
Penalty/Barrier Multiplier Methods for Convex Programming Problems
SIAM Journal on Optimization
Computational Optimization and Applications
Lipschitzian Stability of Newton's Method for Variational Inclusions
Proceedings of the 19th IFIP TC7 Conference on System Modelling and Optimization: Methods, Theory and Applications
Mathematical Programming: Series A and B
A Generic Algorithm for Solving Inclusions
SIAM Journal on Optimization
Direct manipulation of free-form deformation using curve-pairs
Computer-Aided Design
Successive approximation methods appear throughout numerical optimization, where a solution to an optimization problem is sought as the limit of solutions to a succession of simpler approximation problems. Such methods include essentially any standard penalty method, barrier method, trust region method, augmented Lagrangian method, or sequential quadratic programming (SQP) method, as well as many others. The approximation problems on which a successive approximation method is based typically depend on parameters, in which case the performance of the method is tied to the corresponding sequence of parameters. For many successive approximation methods, the sequence of parameters need only approach some parameter target set for the method to enjoy desirable convergence properties. Successive approximation methods can be analyzed as instances of a generic inclusion-solving method from Levy (2004) because the solutions to the approximation problems satisfy necessary optimality inclusions. However, the inclusion-solving method from Levy (2004) was developed for single-parameter target points. In this paper, we extend the results from Levy (2004) to allow parameter target sets and apply these results to the convergence analysis of successive approximation methods. We focus on two important convergence issues: (1) the rate of convergence of the iterates generated by a successive approximation method and (2) the validity of the limit as a solution to the original problem. An augmented Lagrangian method allowing quite general parameter updating is explored in detail to illustrate how the framework presented here can expose interesting new alternatives for numerical optimization.
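To make the successive approximation idea concrete, the following is a minimal sketch (not taken from the paper) of an augmented Lagrangian method for an equality-constrained toy problem. Each outer iteration solves a parameterized approximation problem, and the parameters — the multiplier estimate `lam` and the penalty `r` — are updated so that they approach a target set (`lam` toward the true multiplier, `r` toward infinity). The toy problem, the inner gradient-descent solver, and the doubling penalty update are all illustrative assumptions.

```python
# Hedged sketch of an augmented Lagrangian method as a successive
# approximation scheme.  Toy problem (assumed for illustration):
#     minimize f(x) = x1^2 + x2^2   subject to   c(x) = x1 + x2 - 1 = 0
# whose solution is x* = (0.5, 0.5) with multiplier lam* = -1.

def f(x):
    return x[0]**2 + x[1]**2

def c(x):
    return x[0] + x[1] - 1.0

def grad_aug_lag(x, lam, r):
    # Gradient of L_r(x, lam) = f(x) + lam*c(x) + (r/2)*c(x)^2.
    cx = c(x)
    return (2*x[0] + lam + r*cx, 2*x[1] + lam + r*cx)

def solve_subproblem(x, lam, r, steps=2000):
    # Inner solver for the approximation problem: plain gradient
    # descent, with a step size scaled to the problem's curvature.
    eta = 0.5 / (2.0 + 2.0*r)
    for _ in range(steps):
        g = grad_aug_lag(x, lam, r)
        x = (x[0] - eta*g[0], x[1] - eta*g[1])
    return x

def augmented_lagrangian(x=(0.0, 0.0), lam=0.0, r=1.0, outer=20):
    # Outer loop: solve a succession of approximation problems while
    # driving the parameters (lam, r) toward their target set.
    for _ in range(outer):
        x = solve_subproblem(x, lam, r)   # approximate subproblem solve
        lam = lam + r * c(x)              # first-order multiplier update
        r *= 2.0                          # increase penalty parameter
    return x, lam

x, lam = augmented_lagrangian()
# x approaches (0.5, 0.5) and lam approaches -1.
```

On this toy problem one can check by hand that, with exact inner minimization, the multiplier error contracts by a factor of 1/(1 + r) per outer iteration, so the quite general penalty updating discussed in the paper (here crudely modeled by doubling `r`) directly controls the outer rate of convergence.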