Memory gradient methods are used for unconstrained optimization, especially for large-scale problems. The idea of memory gradient methods was first proposed by Miele and Cantrell (1969) and Cragg and Levy (1969). In this paper, we present a new memory gradient method that generates a descent search direction for the objective function at every iteration. We show that the method converges globally to a solution whenever the step sizes satisfy the Wolfe conditions within a line search strategy. Our numerical results show that the proposed method is efficient on standard test problems when the parameter included in the method is chosen well.
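The abstract does not specify the authors' exact update, but a generic memory gradient iteration takes the direction d_k = -g_k + beta * d_{k-1}, i.e. the negative gradient plus a memory term built from the previous direction, with the step size chosen to satisfy the Wolfe conditions. The following sketch illustrates this idea under stated assumptions: the fixed memory parameter `beta`, the steepest-descent safeguard, and the bisection-style Wolfe search are illustrative choices, not the method proposed in the paper.

```python
import numpy as np

def wolfe_line_search(f, grad, x, d, c1=1e-4, c2=0.9, alpha=1.0, max_iter=50):
    """Find a step alpha satisfying the weak Wolfe conditions by bisection."""
    lo, hi = 0.0, np.inf
    fx = f(x)
    slope = grad(x) @ d          # directional derivative; negative for descent d
    for _ in range(max_iter):
        if f(x + alpha * d) > fx + c1 * alpha * slope:
            # Sufficient-decrease (Armijo) condition fails: shrink the step.
            hi = alpha
            alpha = 0.5 * (lo + hi)
        elif grad(x + alpha * d) @ d < c2 * slope:
            # Curvature condition fails: the step is too short, enlarge it.
            lo = alpha
            alpha = 2.0 * alpha if hi == np.inf else 0.5 * (lo + hi)
        else:
            return alpha
    return alpha

def memory_gradient(f, grad, x0, beta=0.1, tol=1e-8, max_iter=500):
    """Generic memory gradient iteration: d_k = -g_k + beta * d_{k-1}."""
    x = np.asarray(x0, dtype=float)
    d_prev = np.zeros_like(x)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g + beta * d_prev
        if g @ d >= 0:
            d = -g               # safeguard: fall back to steepest descent
        alpha = wolfe_line_search(f, grad, x, d)
        x = x + alpha * d
        d_prev = d
    return x
```

For example, on the quadratic f(x) = ||x - 1||^2 starting from the origin, `memory_gradient(f, grad, [0.0, 0.0])` converges to the minimizer (1, 1).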