Trust region algorithms for optimization with nonlinear equality and inequality constraints
A trust region algorithm for equality constrained optimization
Mathematical Programming: Series A and B
Trust-region methods
Sequential estimation techniques for quasi-Newton algorithms
GALAHAD, a library of thread-safe Fortran 90 packages for large-scale nonlinear optimization
ACM Transactions on Mathematical Software (TOMS)
Convex Optimization
Cubic regularization of Newton method and its global performance
Mathematical Programming: Series A and B
Affine conjugate adaptive Newton methods for nonlinear elastomechanics
Optimization Methods & Software
Modified Gauss-Newton scheme with worst case guarantees for global performance
Optimization Methods & Software
The convergence properties of the new regularized Euclidean residual method for solving general nonlinear least-squares and nonlinear equation problems are investigated. This method, derived from a proposal by Nesterov [Optim. Methods Softw., 22 (2007), pp. 469-483], uses a model of the objective function consisting of the unsquared Euclidean linearized residual regularized by a quadratic term. At variance with previous analyses, its convergence properties are considered here without assuming that the Jacobians are uniformly nonsingular and globally Lipschitz continuous, or that the subproblems are solved exactly. It is proved that the method is globally convergent to first-order critical points and, under stronger assumptions, to roots of the underlying system of nonlinear equations. Under these stronger assumptions, the rate of convergence is also shown to be quadratic.
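To make the model concrete, the following is a minimal Python sketch of one way such an iteration can be organized: at each outer step it approximately minimizes the unsquared-residual model m(s) = ||F(x) + J(x)s|| + (sigma/2)||s||^2 and adapts the regularization weight sigma by an accept/reject test. The inner fixed-point solve and the sigma update rule are illustrative choices for this sketch, not the algorithm analyzed in the paper.

```python
import numpy as np

def reg_euclidean_residual(F, J, x0, sigma0=1.0, tol=1e-8, max_iter=100):
    """Sketch of a regularized Euclidean residual iteration for min ||F(x)||.

    Each step approximately minimizes the model
        m(s) = ||F(x) + J(x) s|| + (sigma / 2) ||s||^2,
    i.e. the unsquared linearized residual plus a quadratic regularizer.
    Inner solve and sigma update are illustrative, not the paper's method.
    """
    x, sigma = np.asarray(x0, dtype=float), sigma0
    for _ in range(max_iter):
        Fx, Jx = F(x), J(x)
        # Stop on an approximate first-order criticality condition.
        if np.linalg.norm(Jx.T @ Fx) <= tol * max(1.0, np.linalg.norm(Fx)):
            break
        # For a nonzero residual r = F + J s, the model's stationarity
        # condition J^T r / ||r|| + sigma s = 0 is equivalent to the
        # Levenberg-Marquardt-type system
        #     (J^T J + sigma ||r|| I) s = -J^T F,
        # which we iterate as a fixed point a few times.
        s = np.zeros_like(x)
        for _ in range(5):
            r_norm = np.linalg.norm(Fx + Jx @ s)
            A = Jx.T @ Jx + sigma * r_norm * np.eye(x.size)
            s = np.linalg.solve(A, -Jx.T @ Fx)
        # Accept the trial step if the actual decrease of ||F|| is a
        # reasonable fraction of the decrease predicted by the model.
        pred = np.linalg.norm(Fx) - np.linalg.norm(Fx + Jx @ s)
        ared = np.linalg.norm(Fx) - np.linalg.norm(F(x + s))
        if pred > 0 and ared >= 0.1 * pred:
            x, sigma = x + s, max(sigma / 2.0, 1e-8)  # success: relax
        else:
            sigma *= 4.0                               # failure: tighten
    return x
```

As a usage example, finding a root of F(x) = (x0^2 + x1^2 - 1, x0 - x1) from the starting point (1, 0) drives ||F(x)|| to roughly machine precision; because the model uses the unsquared residual, no globally Lipschitz or uniformly nonsingular Jacobian is needed for the iteration itself to be well defined.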