This paper addresses the need for nonlinear programming algorithms that provide fast local convergence guarantees regardless of whether a problem is feasible or infeasible. We present a sequential quadratic programming method derived from an exact penalty approach that adjusts the penalty parameter automatically, when appropriate, to emphasize feasibility over optimality. The superlinear convergence of such an algorithm to an optimal solution is well known when a problem is feasible. The main contribution of this paper, however, is a set of conditions under which the superlinear convergence of the same type of algorithm to an infeasible stationary point can be guaranteed when a problem is infeasible. Numerical experiments illustrate the practical behavior of the method on feasible and infeasible problems.
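To make the penalty-steering idea concrete, here is a minimal sketch of my own construction (not the paper's algorithm) on a deliberately infeasible toy problem: minimize f(x) = x subject to c(x) = x² + 1 = 0. Since c(x) > 0 everywhere, the exact ℓ1 penalty φ(x; ρ) = x + ρ(x² + 1) is smooth, and the infeasible stationary point minimizing |c(x)| is x = 0. The outer loop increases ρ whenever the constraint violation stalls, emphasizing feasibility over optimality; the names and the specific update rule (factor 10, 0.9 stall test) are illustrative choices, not taken from the paper.

```python
# Toy illustration of exact-penalty steering (illustrative construction,
# not the paper's method):
#   minimize  f(x) = x   subject to   c(x) = x^2 + 1 = 0  (infeasible)
# Exact l1 penalty: phi(x; rho) = f(x) + rho*|c(x)| = x + rho*(x^2 + 1),
# smooth here because c(x) > 0 for all x; its minimizer is x = -1/(2*rho).

def minimize_penalty(rho, x):
    """Newton iterations on phi(x; rho); phi is quadratic in x
    (phi' = 1 + 2*rho*x, phi'' = 2*rho), so one step is already exact."""
    for _ in range(5):
        x -= (1.0 + 2.0 * rho * x) / (2.0 * rho)
    return x

def steered_penalty(rho=1.0, tol=1e-6, max_outer=10):
    """Increase rho when the constraint violation stalls, steering the
    iterates toward the infeasible stationary point x = 0."""
    x, viol_prev = 0.0, float("inf")
    for _ in range(max_outer):
        x = minimize_penalty(rho, x)
        viol = abs(x * x + 1.0)        # constraint violation |c(x)|
        if viol < tol:                 # feasible enough: stop
            break
        if viol > 0.9 * viol_prev:     # insufficient feasibility progress:
            rho *= 10.0                # emphasize feasibility over optimality
        viol_prev = viol
    return x, rho

x, rho = steered_penalty()
# x approaches 0, the minimizer of the infeasibility measure |c(x)|
```

Because the problem is infeasible, the violation can never drop below the tolerance; the stall test therefore keeps driving ρ upward, and the penalty minimizers -1/(2ρ) converge to the infeasible stationary point, mirroring the behavior the paper analyzes.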