Practical methods of optimization (2nd ed.)
Multi-step quasi-Newton methods for optimization
ICCAM'92 Proceedings of the fifth international conference on Computational and applied mathematics
Using function-values in multi-step quasi-Newton methods
Proceedings of the 6th international congress on Computational and applied mathematics
Alternating multi-step quasi-Newton methods for unconstrained optimization
ICCAM '96 Proceedings of the seventh international congress on Computational and applied mathematics
Testing Unconstrained Optimization Software
ACM Transactions on Mathematical Software (TOMS)
A new two-step gradient-type method for large-scale unconstrained optimization
Computers & Mathematics with Applications
An improved multi-step gradient-type method for large scale optimization
Computers & Mathematics with Applications
Multi-step quasi-Newton methods for optimisation (which use data from more than one previous step to revise the current approximate Hessian) were introduced by Ford and Moghrabi in (J. Comput. Appl. Math. 50 (1994) 305), where they showed how to construct such methods by means of interpolating curves. These methods still employ standard quasi-Newton formulae, but with the vectors normally used in those formulae replaced by others determined from a multi-step version of the secant equation. Methods for defining the parameter values that correspond to the iterates on the interpolating curve (the 'accumulative' and 'fixed-point' approaches) were presented by Ford and Moghrabi in (Optim. Methods Software 2 (1993) 357). Both approaches measure the distances required to parameterise the interpolating polynomials via a norm defined by a positive-definite matrix M. The fixed-point algorithm that takes M to be the current approximate Hessian was found, experimentally, to be the best of the six multi-step methods studied in Ford and Moghrabi (1993), all of which exhibited improved numerical performance by comparison with the standard single-step BFGS method.

To produce a better parameterisation of the interpolation, Ford (Comput. Math. Appl. 42 (2001) 1083) developed the idea of 'implicit update' methods. The fundamental concept is to determine an 'improved' version of the Hessian approximation to be used in computing the metric, while avoiding the computational expense of actually calculating that improved version. Two implicit methods (denoted I2 and I3) were developed from the method F2 in Ford (2001): I2 employed parameter values generated from an implicit single-step BFGS update, while I3 used values from an implicit two-step update. In this paper, we describe the derivation of new implicit updates which are similar to I3.
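The multi-step idea can be sketched compactly: apply a BFGS-form update to a modified vector pair (r, w) built from the two most recent steps rather than to the usual pair (s_k, y_k). This is a minimal illustration, not the exact F2 method of the papers cited above; the coefficient delta^2/(1 + 2*delta) is what one obtains by differentiating the quadratic interpolant through the last three iterates at the newest parameter value, and the specific matrices and step vectors below are invented for the demo.

```python
import numpy as np

def bfgs_update(B, u, v):
    """BFGS-form update of the Hessian approximation B with a vector pair
    (u, v).  The standard single-step method uses u = s_k, v = y_k; a
    multi-step method substitutes vectors built from several previous steps.
    Requires v @ u > 0 to preserve positive definiteness."""
    Bu = B @ u
    return B - np.outer(Bu, Bu) / (u @ Bu) + np.outer(v, v) / (v @ u)

def two_step_pair(s_prev, s_curr, y_prev, y_curr, delta):
    """Modified secant pair (r, w) for a two-step method, obtained by
    differentiating the quadratic interpolant through x_{k-1}, x_k, x_{k+1}
    at the newest parameter value, where
    delta = (tau_2 - tau_1) / (tau_1 - tau_0) > 0."""
    coeff = delta**2 / (1.0 + 2.0 * delta)
    r = s_curr - coeff * s_prev
    w = y_curr - coeff * y_prev
    return r, w

# Demo on a convex quadratic f(x) = 0.5 x^T A x, where y_k = A s_k exactly
# (A, the steps, and delta are arbitrary illustrative choices).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
s_prev, s_curr = np.array([1.0, 0.0]), np.array([0.3, 1.0])
y_prev, y_curr = A @ s_prev, A @ s_curr

r, w = two_step_pair(s_prev, s_curr, y_prev, y_curr, delta=0.8)
B_next = bfgs_update(np.eye(2), r, w)

# By construction, the update enforces the two-step secant equation
# B_{k+1} r = w (the multi-step analogue of B_{k+1} s_k = y_k).
print(np.allclose(B_next @ r, w))
```

Note that the BFGS formula is invariant under a joint rescaling of (r, w), so any convenient normalisation of the interpolant's derivative may be used.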
The experimental results we present show that one of the new implicit methods produces markedly better performance than the existing implicit methods, particularly as the dimension of the test problem grows.
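The role of the metric M in the parameterisation can also be sketched. In the accumulative approach described above, consecutive parameter values are separated by the M-norm distance between the corresponding iterates; taking M to be the current approximate Hessian gives the fixed-point flavour, while M = I recovers plain Euclidean arc-length. The matrix and step vectors here are illustrative assumptions, not data from the paper.

```python
import numpy as np

def m_norm(v, M):
    """Length of v in the metric defined by a positive-definite matrix M."""
    return np.sqrt(v @ M @ v)

def accumulative_taus(steps, M):
    """Accumulative parameterisation: tau_j - tau_{j-1} = ||x_j - x_{j-1}||_M.
    The newest iterate is assigned tau = 0, so earlier iterates receive
    negative parameter values.  `steps` lists s_{k-1}, s_k in order."""
    taus = [0.0]
    for s in reversed(steps):            # walk back through s_k, s_{k-1}, ...
        taus.append(taus[-1] - m_norm(s, M))
    return list(reversed(taus))          # [tau_0, tau_1, tau_2]

# Fixed-point flavour: M is the current Hessian approximation B_k
# (an illustrative matrix here, not one from the reported experiments).
B = np.array([[4.0, 1.0], [1.0, 3.0]])
steps = [np.array([1.0, 0.0]), np.array([0.3, 1.0])]   # s_{k-1}, s_k
tau0, tau1, tau2 = accumulative_taus(steps, B)

# delta feeds directly into the two-step secant construction.
delta = (tau2 - tau1) / (tau1 - tau0)
print(tau0, tau1, tau2, delta)
```

An implicit-update method, in this picture, would compute these distances in the metric of an improved Hessian approximation without ever forming that matrix explicitly.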