Self-adaptive inexact proximal point methods
Computational Optimization and Applications
The proximal method is a standard regularization approach in optimization. Practical implementations of this algorithm require (i) an algorithm to compute the proximal point, (ii) a rule to stop this algorithm, and (iii) an update formula for the proximal parameter. In this work we focus on (ii) when smoothness is present, so that Newton-like methods can be used for (i): we aim to give stopping rules that yield overall efficiency of the method. Roughly speaking, the usual rules stop the inner iterations when the current iterate is close to the proximal point. By contrast, we use the standard paradigm of numerical optimization: the basis for our stopping test is a "sufficient" decrease of the objective function, namely a fraction of the ideal decrease. We establish convergence of the resulting algorithm and illustrate it on some ill-conditioned problems. The experiments show that combining the proposed inexact proximal scheme with a standard smooth optimization algorithm improves the numerical behaviour of the latter on these ill-conditioned problems.
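The abstract's idea can be sketched in code. The following is only an illustration, not the paper's actual algorithm: plain gradient descent stands in for the Newton-like inner solver, and the sufficient-decrease test uses a simple computable surrogate for the ideal decrease, namely the upper bound f(x) - phi(y) + lam*||grad phi(y)||^2 / 2 obtained from the (1/lam)-strong convexity of the prox subproblem phi(y) = f(y) + ||y - x||^2 / (2*lam). All names and parameter choices (`m`, `lam`, the test function) are illustrative.

```python
import numpy as np

def inexact_prox_point(f, grad_f, x0, L, lam=1.0, m=0.5,
                       outer_iters=50, inner_cap=200):
    """Inexact proximal point sketch (illustration only).

    Each outer step approximately minimizes the prox subproblem
        phi(y) = f(y) + ||y - x||^2 / (2*lam)
    by gradient descent.  The inner loop stops once the achieved decrease
    f(x) - f(y) is at least a fraction m of a computable upper bound on
    the ideal decrease f(x) - min phi.  Since phi is (1/lam)-strongly
    convex,  min phi >= phi(y) - lam * ||grad phi(y)||^2 / 2,  which
    gives the bound used below.  L is a Lipschitz constant of grad_f.
    """
    x = np.asarray(x0, dtype=float)
    step = 1.0 / (L + 1.0 / lam)          # safe gradient step for phi
    for _ in range(outer_iters):
        fx = f(x)
        y = x.copy()
        for _ in range(inner_cap):
            y = y - step * (grad_f(y) + (y - x) / lam)   # inner step
            g_phi = grad_f(y) + (y - x) / lam
            phi_y = f(y) + np.dot(y - x, y - x) / (2 * lam)
            # upper bound on the ideal decrease f(x) - min phi
            ideal_ub = (fx - phi_y) + lam * np.dot(g_phi, g_phi) / 2
            if fx - f(y) >= m * ideal_ub:                # sufficient decrease
                break
        x = y
    return x

# Demo on an ill-conditioned quadratic (condition number 100).
A = np.diag([1.0, 100.0])
f = lambda z: 0.5 * z @ A @ z
grad_f = lambda z: A @ z
x_star = inexact_prox_point(f, grad_f, x0=[1.0, 1.0], L=100.0)
```

Note that the test is eventually satisfied: as the inner iterate approaches the proximal point, the gradient term vanishes and f(x) - f(y) >= f(x) - phi(y) holds automatically for any m <= 1, so the inner loop terminates.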