Most local convergence analyses of the sequential quadratic programming (SQP) algorithm for nonlinear programming make strong assumptions about the solution: that the gradients of the active constraints are linearly independent and that no constraints are weakly active. In this paper, we establish a framework for variants of SQP that retain the characteristic superlinear convergence rate even when these assumptions are relaxed, proving general convergence results and placing several recently proposed SQP variants within this framework. We also discuss why implementations of SQP often continue to exhibit good local convergence behavior even when the assumptions commonly made in the analysis are violated. Finally, we describe a new algorithm that formalizes and extends standard SQP implementation techniques, and we prove convergence results for this method as well.
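For context (a standard formulation, not part of the original abstract), the SQP subproblem at an iterate $(x_k, \lambda_k)$ for a problem of the assumed form $\min_x f(x)$ subject to $c_i(x) \ge 0$, $i = 1, \dots, m$, is the quadratic program

```latex
\min_{d} \; \nabla f(x_k)^{\top} d
  + \tfrac{1}{2}\, d^{\top} \nabla^{2}_{xx} \mathcal{L}(x_k, \lambda_k)\, d
\quad \text{s.t.} \quad
c_i(x_k) + \nabla c_i(x_k)^{\top} d \;\ge\; 0, \qquad i = 1, \dots, m,
```

where $\mathcal{L}(x, \lambda) = f(x) - \lambda^{\top} c(x)$ is the Lagrangian. The assumptions discussed in the abstract concern the solution $x^*$: linear independence of the active constraint gradients $\{\nabla c_i(x^*) : c_i(x^*) = 0\}$, and strict complementarity, i.e., the absence of weakly active constraints (those with $c_i(x^*) = 0$ and multiplier $\lambda_i^* = 0$).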