This paper reviews, extends and analyses a new class of penalty methods for nonlinear optimization. These methods adjust the penalty parameter dynamically; by controlling the degree of linear feasibility achieved at every iteration, they promote balanced progress toward optimality and feasibility. In contrast with classical approaches, the choice of the penalty parameter ceases to be a heuristic and is determined, instead, by a subproblem with clearly defined objectives. The new penalty update strategy is presented in the context of sequential quadratic programming and sequential linear-quadratic programming methods that use trust regions to promote convergence. The paper concludes with a discussion of penalty parameters for merit functions used in line search methods.
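To make the idea concrete, the following is a minimal illustrative sketch, not the paper's actual algorithm: an exact ℓ1 penalty merit function together with a simple dynamic update rule that increases the penalty parameter when a trial step fails to achieve a sufficient fraction of the attainable reduction in (linearized) infeasibility. The function names, the 0.9 fraction, and the tenfold increase factor are all assumptions chosen for the example.

```python
import numpy as np

def l1_penalty(f, c, x, nu):
    """Exact l1 penalty merit function: phi(x; nu) = f(x) + nu * ||c(x)||_1.

    f  -- objective function, f(x) -> float
    c  -- equality-constraint residuals, c(x) -> ndarray (zero at feasibility)
    nu -- penalty parameter weighting constraint violation against the objective
    """
    return f(x) + nu * np.sum(np.abs(c(x)))

def steer_penalty(nu, infeas_step, infeas_best, fraction=0.9, increase=10.0, tol=1e-8):
    """Illustrative penalty update (hypothetical rule, not the paper's):
    if the trial step leaves more infeasibility than a set fraction of the
    best achievable level infeas_best, the current nu is judged too small
    and is increased; otherwise it is left unchanged."""
    if infeas_step > fraction * infeas_best + tol:
        return increase * nu
    return nu

# Example: one constraint c(x) = x[0] - 1, objective f(x) = x[0]^2.
f = lambda x: x[0] ** 2
c = lambda x: np.array([x[0] - 1.0])
phi = l1_penalty(f, c, np.array([0.0]), nu=2.0)  # 0 + 2*|0-1| = 2.0
```

In a trust-region SQP or SLQP iteration, `infeas_best` would come from a feasibility subproblem solved over the same trust region, so the update is driven by a subproblem with a clearly defined objective rather than by an ad hoc heuristic, which is the point the abstract makes.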