An extension of the Gauss-Newton algorithm is proposed for finding local minimizers of penalized nonlinear least squares problems under generalized Lipschitz assumptions. Local convergence results are obtained, together with an estimate of the radius of the convergence ball. Applications to solving constrained nonlinear equations are discussed, and the numerical performance of the method is assessed on significant test problems.
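As background for the abstract above, the following is a minimal sketch of the classical (unpenalized) Gauss-Newton iteration for nonlinear least squares, which the paper extends to the penalized setting. The function names and the test problem are illustrative choices, not taken from the paper:

```python
import numpy as np

def gauss_newton(r, J, x0, tol=1e-10, max_iter=50):
    """Classical Gauss-Newton iteration for min 0.5*||r(x)||^2.

    r : residual map R^n -> R^m, J : its Jacobian.
    Each step solves the linearized least-squares subproblem
        min_s ||r(x) + J(x) s||
    via numpy's least-squares solver, then updates x <- x + s.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Gauss-Newton step: least-squares solution of J(x) s = -r(x)
        s, *_ = np.linalg.lstsq(J(x), -r(x), rcond=None)
        x = x + s
        if np.linalg.norm(s) < tol:  # stop when the step is negligible
            break
    return x

# Illustrative square system r(x) = 0: circle intersected with the line x0 = x1
r = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
x_star = gauss_newton(r, J, np.array([1.0, 0.0]))
# converges to (sqrt(2)/2, sqrt(2)/2)
```

In the zero-residual case, as here, the iteration behaves like Newton's method and converges fast from a good starting point; the paper's penalized variant and its convergence-ball estimate address the more general setting.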