A generalization of the proximal point algorithm. SIAM Journal on Control and Optimization.
CUTE: constrained and unconstrained testing environment. ACM Transactions on Mathematical Software (TOMS).
Proximal Point Methods and Nonconvex Optimization. Journal of Global Optimization.
Inexact Variants of the Proximal Point Algorithm without Monotonicity. SIAM Journal on Optimization.
Local Convergence of the Proximal Point Algorithm and Multiplier Methods Without Monotonicity. Mathematics of Operations Research.
Regularized Newton Methods for Convex Minimization Problems with Singular Solutions. Computational Optimization and Applications.
A New Conjugate Gradient Method with Guaranteed Descent and an Efficient Line Search. SIAM Journal on Optimization.
Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent. ACM Transactions on Mathematical Software (TOMS).
A Derivative-Free Algorithm for Least-Squares Minimization. SIAM Journal on Optimization.
On the local convergence of a derivative-free algorithm for least-squares minimization. Computational Optimization and Applications.
Interior proximal methods for quasiconvex optimization. Journal of Global Optimization.
Descentwise inexact proximal algorithms for smooth optimization. Computational Optimization and Applications.
We propose a class of self-adaptive proximal point methods suitable for degenerate optimization problems where multiple minimizers may exist, or where the Hessian may be singular at a local minimizer. If the proximal regularization parameter has the form $\mu({\bf{x}})=\beta\|\nabla f({\bf{x}})\|^{\eta}$ where $\eta\in[0,2)$ and $\beta>0$ is a constant, we obtain convergence to the set of minimizers that is linear for $\eta=0$ and $\beta$ sufficiently small, superlinear for $\eta\in(0,1)$, and at least quadratic for $\eta\in[1,2)$. Two different acceptance criteria for an approximate solution to the proximal problem are analyzed. These criteria are expressed in terms of the gradient of the proximal function, the gradient of the original function, and the iteration difference. With either acceptance criterion, the convergence results are analogous to those of the exact iterates. Preliminary numerical results are presented using some ill-conditioned CUTE test problems.
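Since the abstract describes the method only in words, the following Python sketch illustrates the self-adaptive iteration. Only the regularization rule $\mu({\bf{x}}_k)=\beta\|\nabla f({\bf{x}}_k)\|^{\eta}$ comes from the abstract; the L-BFGS-B inner solver, the tolerance factor `theta`, and the parameter defaults are illustrative assumptions, not the acceptance criteria analyzed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def self_adaptive_prox(f, grad, x0, beta=1.0, eta=1.0, theta=0.1,
                       tol=1e-8, max_iter=100):
    # Self-adaptive proximal point sketch: at each iterate x_k the
    # regularization weight is mu_k = beta * ||grad f(x_k)||**eta, and the
    # proximal subproblem is solved inexactly (hypothetical inner solver).
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm <= tol:                      # approximate stationarity
            break
        mu = beta * gnorm ** eta              # self-adaptive parameter
        xk = x.copy()

        # Proximal function f(y) + (mu/2)||y - x_k||^2 and its gradient.
        prox_f = lambda y: f(y) + 0.5 * mu * np.dot(y - xk, y - xk)
        prox_g = lambda y: grad(y) + mu * (y - xk)

        # Inexact solve: stop once the gradient of the proximal function is
        # small relative to ||grad f(x_k)|| (an assumed tolerance rule).
        res = minimize(prox_f, xk, jac=prox_g, method="L-BFGS-B",
                       options={"gtol": max(theta * gnorm, 1e-12)})
        x = res.x
    return x

# Toy degenerate problem: f(x) = (x.x)^2 has a singular Hessian at the
# minimizer x* = 0, where plain Newton steps break down.
x_star = self_adaptive_prox(lambda x: np.dot(x, x) ** 2,
                            lambda x: 4.0 * np.dot(x, x) * x,
                            x0=np.ones(5))
```

The `gtol`-based stopping rule mimics the idea of accepting an approximate proximal solution once the gradient of the proximal function is small relative to the gradient of the original function; the paper's actual criteria also involve the iteration difference.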