Modified Two-Point Stepsize Gradient Methods for Unconstrained Optimization

  • Authors:
  • Yuhong Dai; Jinyun Yuan; Ya-Xiang Yuan

  • Affiliations:
  • LSEC, Institute of Computational Mathematics, Academy of Mathematics and System Sciences, Chinese Academy of Sciences, Beijing 100080, People's Republic of China. dyh@lsec.cc.ac.cn; jin@mat.ufpr.br; LSEC, Institute of Computational Mathematics, Academy of Mathematics and System Sciences, Chinese Academy of Sciences, Beijing 100080, People's Republic of China. yyx@lsec.cc.ac.cn

  • Venue:
  • Computational Optimization and Applications
  • Year:
  • 2002

Abstract

For unconstrained optimization, the two-point stepsize gradient method is preferable to the classical steepest descent method both in theory and in practical computation. In this paper we interpret the stepsize choice of the two-point stepsize gradient method from the viewpoint of interpolation and propose two modified two-point stepsize gradient methods. The modified methods are globally convergent under mild assumptions on the objective function. Numerical results are reported, suggesting that improvements have been achieved.
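The paper's modified stepsizes are not given in this abstract; as background, a minimal sketch of the classical two-point stepsize (Barzilai–Borwein) gradient iteration it builds on may help. The sketch uses the standard BB stepsize alpha_k = s^T s / s^T y with s = x_k - x_{k-1} and y = g_k - g_{k-1}, applied to a simple convex quadratic; the function and parameter names are illustrative, not from the paper.

```python
def bb_gradient_descent(grad, x0, alpha0=1e-3, tol=1e-8, max_iter=500):
    """Classical two-point stepsize (Barzilai-Borwein) gradient method.

    Illustrative sketch only: the paper proposes *modified* stepsizes,
    which differ from the plain BB rule used here.
    """
    x_prev = list(x0)
    g_prev = grad(x_prev)
    # First step: no previous point yet, so take a fixed small stepsize.
    x = [xi - alpha0 * gi for xi, gi in zip(x_prev, g_prev)]
    for _ in range(max_iter):
        g = grad(x)
        if max(abs(gi) for gi in g) < tol:
            break
        s = [a - b for a, b in zip(x, x_prev)]      # s_{k-1} = x_k - x_{k-1}
        y = [a - b for a, b in zip(g, g_prev)]      # y_{k-1} = g_k - g_{k-1}
        sty = sum(si * yi for si, yi in zip(s, y))
        if sty <= 0:
            alpha = alpha0  # fall back when the curvature estimate is unusable
        else:
            # BB stepsize: alpha_k = s^T s / s^T y
            alpha = sum(si * si for si in s) / sty
        x_prev, g_prev = x, g
        x = [xi - alpha * gi for xi, gi in zip(x, g)]
    return x

# Example: f(x) = 0.5*(2*x1^2 + 10*x2^2), gradient [2*x1, 10*x2],
# minimizer at the origin.
grad_f = lambda x: [2.0 * x[0], 10.0 * x[1]]
x_star = bb_gradient_descent(grad_f, [5.0, 1.0])
```

On a strictly convex quadratic the BB iteration is known to converge despite being nonmonotone, which is part of why it outperforms classical steepest descent in practice.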