Improving ultimate convergence of an augmented Lagrangian method. Optimization Methods & Software (issue dedicated to Professor Michael J. D. Powell on the occasion of his 70th birthday).
Active Set Identification for Linearly Constrained Minimization Without Explicit Derivatives. SIAM Journal on Optimization.
Online learning in the embedded manifold of low-rank matrices. The Journal of Machine Learning Research.
Manifold identification in dual averaging for regularized stochastic online learning. The Journal of Machine Learning Research.
Techniques that identify the active constraints at a solution of a nonlinear programming problem, given only a point near the solution, can be a useful adjunct to nonlinear programming algorithms. They have the potential to improve the local convergence behavior of these algorithms, and in the best case can reduce an inequality-constrained problem to an equality-constrained problem with the same solution. This paper describes several techniques that do not require good Lagrange multiplier estimates for the constraints to be available a priori, depending instead only on function and first-derivative information. Computational tests comparing the effectiveness of these techniques on a variety of test problems are described. Many tests involve degenerate cases, in which the constraint gradients are not linearly independent and/or strict complementarity does not hold.
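To illustrate the general idea (this is a generic sketch, not the specific identification techniques proposed in the paper), one common family of rules declares an inequality constraint c_i(x) >= 0 active when its value falls below a threshold that shrinks with a first-order optimality residual, so that no multiplier estimates are needed. The threshold rule tau = residual**gamma and the exponent gamma are illustrative assumptions:

```python
def estimate_active_set(c_vals, residual, gamma=0.5):
    """Estimate which inequality constraints c_i(x) >= 0 are active
    at a nearby solution, using only constraint values and a
    first-order optimality residual (no Lagrange multiplier
    estimates). Hypothetical rule: constraint i is declared active
    when c_i(x) <= residual**gamma; since residual -> 0 as x
    approaches the solution, the threshold tightens automatically."""
    tau = residual ** gamma
    return {i for i, ci in enumerate(c_vals) if ci <= tau}


# Toy usage: two constraints at a point near a solution where only
# the first is active. With residual 1e-2 the threshold is 0.1, so
# c_0 = 1e-3 is flagged active while c_1 = 0.7 is not.
active = estimate_active_set([1e-3, 0.7], residual=1e-2)
print(active)  # -> {0}
```

Once the active set is identified correctly, the inequality-constrained problem can be handed to an equality-constrained solver restricted to the identified constraints, which is the reduction the abstract refers to.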