In this paper we describe a Newton-type algorithm model for solving smooth constrained optimization problems with a nonlinear objective function, general linear constraints, and bounded variables. The algorithm model is based on the definition of a continuously differentiable exact merit function that follows an exact penalty approach for the box constraints and an exact augmented Lagrangian approach for the general linear constraints. Under very mild assumptions, and without requiring the strict complementarity assumption, the algorithm model produces a sequence of pairs $\{x^k, \lambda^k\}$ converging quadratically to a pair $(\bar{x}, \bar{\lambda})$, where $\bar{x}$ satisfies the first-order necessary conditions and $\bar{\lambda}$ is a KKT multiplier vector associated with the linear constraints. As regards the behaviour of the sequence $\{x^k\}$ alone, it is guaranteed to converge at least superlinearly. At each iteration, the algorithm requires only the solution of a linear system, which can be performed by means of conjugate gradient methods. Numerical experiments and comparisons are reported.
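The per-iteration work highlighted in the abstract is the (possibly inexact) solution of one linear system by a conjugate gradient method. The sketch below illustrates only that structure: a Newton-type step whose direction is obtained by CG from gradient and Hessian-vector products. The function names, the generic merit function interface, and the quadratic test problem are assumptions made for illustration; this is not the paper's algorithm, which additionally handles the box and linear constraints through the exact merit function.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator


def newton_cg_step(grad, hess_vec, x):
    """One illustrative Newton-type step: solve H d = -g approximately with CG.

    grad(x)       -> gradient of a smooth merit function at x (assumed available)
    hess_vec(x,v) -> Hessian-vector product at x (assumed available)
    Only the "one linear system per iteration, solved by conjugate gradients"
    structure mentioned in the abstract is reproduced here.
    """
    g = grad(x)
    n = x.size
    H = LinearOperator((n, n), matvec=lambda v: hess_vec(x, v))
    d, _ = cg(H, -g)      # approximate Newton direction
    return x + d          # a full method would add safeguards (e.g. a line search)


# Hypothetical usage on a strictly convex quadratic 0.5*x'Ax - b'x,
# for which one exact Newton step from the origin reaches the minimizer A^{-1} b.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
grad = lambda x: A @ x - b
hess_vec = lambda x, v: A @ v
x = newton_cg_step(grad, hess_vec, np.zeros(2))
print(x)
```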