Convergence of the Gradient Projection Method for Generalized Convex Minimization

  • Authors:
  • Changyu Wang; Naihua Xiu

  • Affiliations:
  • Changyu Wang: Operations Research Center, Qufu Teachers' University, Qufu; Institute of Applied Mathematics, Academia Sinica, Beijing 100080; Dept. of Applied Mathematics, Dalian University of Techno ...
  • Naihua Xiu: Institute of Applied Mathematics, Academia Sinica, Beijing 100080; Department of Applied Mathematics, Northern Jiaotong University, Beijing 100044, People's Republic of China

  • Venue:
  • Computational Optimization and Applications
  • Year:
  • 2000

Abstract

This paper develops the convergence theory of the gradient projection method of Calamai and Moré (Math. Programming, vol. 39, 93–116, 1987), which, for the problem min{f(x) : x ∈ Ω} of minimizing a continuously differentiable function f over a nonempty closed convex set Ω, generates a sequence x_{k+1} = P(x_k − α_k ∇f(x_k)), where P is the projection onto Ω and the stepsize α_k > 0 is chosen suitably. It is shown that, when f(x) is a pseudo-convex (quasi-convex) function, this method has strong convergence results: either x_k → x^* and x^* is a minimizer (stationary point); or ‖x_k‖ → ∞, arg min{f(x) : x ∈ Ω} = ∅, and f(x_k) ↓ inf{f(x) : x ∈ Ω}.
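The iteration described in the abstract can be illustrated with a minimal sketch. The code below is an illustrative implementation of the generic gradient projection step x_{k+1} = P_Ω(x_k − α_k ∇f(x_k)) for a simple box constraint Ω, with an Armijo-type backtracking rule along the projection arc; the particular projection, stepsize rule, and test problem are assumptions for illustration and are not taken from the paper.

```python
# Sketch of the gradient projection iteration x_{k+1} = P_Omega(x_k - alpha_k * grad f(x_k)).
# Omega is taken to be a box {x : lo <= x <= hi}; the Armijo-type backtracking and the
# quadratic test problem below are illustrative assumptions, not the authors' exact method.
import numpy as np

def project_box(x, lo, hi):
    """Projection P_Omega onto the box Omega = {x : lo <= x <= hi}."""
    return np.clip(x, lo, hi)

def gradient_projection(f, grad, x0, lo, hi, alpha0=1.0, beta=0.5, sigma=1e-4,
                        tol=1e-8, max_iter=1000):
    x = project_box(np.asarray(x0, dtype=float), lo, hi)
    for _ in range(max_iter):
        g = grad(x)
        alpha = alpha0
        # Backtrack along the projection arc until a sufficient-decrease
        # (Armijo-type) condition holds.
        while True:
            x_new = project_box(x - alpha * g, lo, hi)
            if f(x_new) <= f(x) + sigma * g.dot(x_new - x) or alpha < 1e-16:
                break
            alpha *= beta
        if np.linalg.norm(x_new - x) <= tol:
            return x_new  # approximate stationary point of f over Omega
        x = x_new
    return x

# Illustrative use: minimize a smooth convex quadratic over the box [0, 2]^2.
if __name__ == "__main__":
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, 1.0])
    f = lambda x: 0.5 * x @ A @ x - b @ x
    grad = lambda x: A @ x - b
    x_star = gradient_projection(f, grad, x0=[2.0, 2.0], lo=0.0, hi=2.0)
    print("approximate minimizer:", x_star)
```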