At each outer iteration of standard Augmented Lagrangian methods, one tries to solve a box-constrained optimization subproblem to some prescribed tolerance. In the continuous world, using exact arithmetic, this subproblem is always solvable, so the possibility of terminating the subproblem solution process without satisfying the theoretical stopping conditions is not contemplated in the usual convergence theories. In practice, however, one may be unable to solve the subproblem to the required precision, for several reasons. One of them is that an excessively large penalty parameter can impair the performance of the box-constrained optimization solver. In this paper, a practical strategy for decreasing the penalty parameter in situations like the one mentioned above is proposed. More generally, the different decisions that may be taken when, in practice, one is not able to solve the Augmented Lagrangian subproblem are discussed. As a result, an improved Augmented Lagrangian method is presented, which takes numerical difficulties into account in a satisfactory way while preserving a suitable convergence theory. Numerical experiments involving all the test problems of the CUTEr collection are presented.
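To make the setting concrete, the following is a minimal sketch (not the authors' algorithm) of an augmented Lagrangian outer loop for minimizing f(x) subject to h(x) = 0 and bound constraints, with a projected-gradient inner solver. The penalty-update rule is a hypothetical illustration of the idea discussed above: when the inner solver fails to satisfy its stopping test, the penalty parameter is decreased to ease the subproblem, rather than only ever increased. All function names, step sizes, and update constants are assumptions made for this sketch.

```python
import numpy as np

def augmented_lagrangian(f_grad, h, h_jac, x0, lo, hi,
                         rho=10.0, outer_iters=30,
                         inner_iters=500, step=1e-2, tol=1e-6):
    """Sketch: min f(x) s.t. h(x) = 0, lo <= x <= hi.

    Inner solver: fixed-step projected gradient on the augmented
    Lagrangian L(x) = f(x) + lam.h(x) + (rho/2)||h(x)||^2.
    If the inner solver does not converge, rho is *decreased*
    (illustrative rule); otherwise rho is increased only when
    feasibility does not improve enough, as is standard.
    """
    x = np.clip(np.asarray(x0, dtype=float), lo, hi)
    lam = np.zeros(len(h(x)))
    prev_infeas = np.inf
    for _ in range(outer_iters):
        solved = False
        for _ in range(inner_iters):
            g = f_grad(x) + h_jac(x).T @ (lam + rho * h(x))
            x_new = np.clip(x - step * g, lo, hi)
            if np.linalg.norm(x_new - x) <= tol * step:
                x, solved = x_new, True
                break
            x = x_new
        infeas = np.linalg.norm(h(x))
        if not solved:
            rho = max(rho / 10.0, 1e-8)   # subproblem too hard: ease it
        else:
            lam = lam + rho * h(x)        # first-order multiplier update
            if infeas > 0.5 * prev_infeas:
                rho *= 10.0               # standard penalty increase
            prev_infeas = min(prev_infeas, infeas)
        if solved and infeas <= tol:
            break
    return x, lam, rho
```

On a small test instance (minimize ||x||^2 subject to x1 + x2 = 1, 0 <= x <= 1), the sketch converges to x = (0.5, 0.5) with multiplier near -1. In a production solver the inner loop would of course be a dedicated box-constrained method such as GENCAN or L-BFGS-B rather than plain projected gradient.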