Journal of Optimization Theory and Applications
The gradient projection method using Curry's steplength. SIAM Journal on Control and Optimization.
Projected gradient methods for linearly constrained problems. Mathematical Programming, Series A and B.
On the convergence of projected gradient processes to singular critical points. Journal of Optimization Theory and Applications.
Global convergence of a class of trust region algorithms for optimization with simple bounds. SIAM Journal on Numerical Analysis.
Convergence properties of trust region methods for linear and convex constraints. Mathematical Programming, Series A and B.
A subspace decomposition principle for scaled gradient projection methods: global theory. SIAM Journal on Control and Optimization.
On the linear convergence of descent methods for convex essentially smooth minimization. SIAM Journal on Control and Optimization.
On the convergence of the coordinate descent method for convex differentiable minimization. Journal of Optimization Theory and Applications.
Convergence of the steepest descent method for minimizing quasiconvex functions. Journal of Optimization Theory and Applications.
Modified projection-type methods for monotone variational inequalities. SIAM Journal on Control and Optimization.
Convergence properties of nonmonotone spectral projected gradient methods. Journal of Computational and Applied Mathematics.
Mathematics of Operations Research
Continuous multiclass labeling approaches and algorithms. SIAM Journal on Imaging Sciences.
This paper develops the convergence theory of the gradient projection method of Calamai and Moré (Math. Programming, vol. 39, pp. 93–116, 1987), which, for minimizing a continuously differentiable function f over a nonempty closed convex set Ω, i.e., for the problem min{f(x) : x ∈ Ω}, generates a sequence x_{k+1} = P_Ω(x_k − α_k ∇f(x_k)), where P_Ω denotes the projection onto Ω and the stepsize α_k > 0 is chosen suitably. It is shown that, when f is a pseudo-convex (quasi-convex) function, the method has strong convergence properties: either x_k → x^* and x^* is a minimizer (stationary point), or ‖x_k‖ → ∞, arg min{f(x) : x ∈ Ω} = ∅, and f(x_k) ↓ inf{f(x) : x ∈ Ω}.
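To make the iteration concrete, here is a minimal sketch in Python, assuming a box-constrained feasible set Ω = [0, 1]^n (so P_Ω is a componentwise clip) and a fixed stepsize; the paper's theory covers general closed convex Ω and suitably chosen variable stepsizes α_k > 0, and all names and parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

def gradient_projection(grad_f, project, x0, alpha=0.5, tol=1e-10, max_iter=10_000):
    """Gradient projection iteration x_{k+1} = P_Omega(x_k - alpha * grad f(x_k)).

    A fixed stepsize alpha is used for simplicity; the paper's convergence
    theory allows suitably chosen variable stepsizes alpha_k > 0.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = project(x - alpha * grad_f(x))
        # A point is stationary for min{f(x) : x in Omega} iff it is a fixed
        # point of the projected-gradient map, so stop when the step stalls.
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

# Example: minimize f(x) = 0.5 * ||x - c||^2 over the box Omega = [0, 1]^3.
# The minimizer is the projection of c onto the box: (0.7, 0.0, 1.0).
c = np.array([0.7, -0.3, 1.5])
grad_f = lambda x: x - c                  # gradient of f
project = lambda x: np.clip(x, 0.0, 1.0)  # projection onto [0, 1]^3
print(gradient_projection(grad_f, project, np.zeros(3)))
```

Since f in this toy example is convex (hence pseudo-convex) and the box is compact, the iterates fall into the first branch of the dichotomy above: x_k → x^* with x^* the constrained minimizer.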