On the limited memory BFGS method for large scale optimization. Mathematical Programming: Series A and B.
Rank-deficient and discrete ill-posed problems: numerical aspects of linear inversion.
Remark on “Algorithm 500: Minimization of Unconstrained Multivariate Functions [E4]”. ACM Transactions on Mathematical Software (TOMS).
Algorithm 809: PREQN: Fortran 77 subroutines for preconditioning the conjugate gradient method. ACM Transactions on Mathematical Software (TOMS).
Automatic Preconditioning by Limited Memory Quasi-Newton Updating. SIAM Journal on Optimization.
Enriched Methods for Large-Scale Unconstrained Optimization. Computational Optimization and Applications.
Perspectives in Flow Control and Optimization.
A New Conjugate Gradient Method with Guaranteed Descent and an Efficient Line Search. SIAM Journal on Optimization.
Numerical Methods for Unconstrained Optimization and Nonlinear Equations (Classics in Applied Mathematics, 16).
Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent. ACM Transactions on Mathematical Software (TOMS).
We compare the performance of several robust large-scale minimization algorithms on the unconstrained minimization of an ill-posed inverse problem, using a parabolized Navier-Stokes equation model for adjoint parameter estimation. The methods compared consist of three versions of the nonlinear conjugate-gradient (CG) method, the quasi-Newton Broyden-Fletcher-Goldfarb-Shanno (BFGS) method, the limited-memory quasi-Newton (L-BFGS) method [D.C. Liu and J. Nocedal, On the limited memory BFGS method for large scale optimization, Math. Program. 45 (1989), pp. 503-528], the truncated Newton (T-N) method [S.G. Nash, Preconditioning of truncated Newton methods, SIAM J. Sci. Stat. Comput. 6 (1985), pp. 599-616; S.G. Nash, Newton-type minimization via the Lanczos method, SIAM J. Numer. Anal. 21 (1984), pp. 770-788], and a new hybrid algorithm proposed by Morales and Nocedal [J.L. Morales and J. Nocedal, Enriched methods for large-scale unconstrained optimization, Comput. Optim. Appl. 21 (2002), pp. 143-154]. For all the methods tested, the gradient of the cost function is obtained via an adjoint method, and a detailed algorithmic description of each method employed in the comparison is provided. For the inviscid case, the CG-descent method of Hager and Zhang [W.W. Hager and H. Zhang, A new conjugate gradient method with guaranteed descent and an efficient line search, SIAM J. Optim. 16 (1) (2005), pp. 170-192] performed best, followed closely by the hybrid method [Morales and Nocedal, op. cit.], while in the viscous case the hybrid method emerged as the best performer, followed by the CG method of Shanno and Phua [D.F. Shanno and K.H. Phua, Remark on Algorithm 500: minimization of unconstrained multivariate functions, ACM Trans. Math. Softw. 6 (1980), pp. 618-622] and CG-descent. These results required an adequate choice of parameters in the CG-descent method, as well as control of the number of L-BFGS and T-N iterations interlaced in the hybrid method.
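The following minimal Python sketch illustrates the kind of comparison reported above, using SciPy's built-in CG, BFGS, L-BFGS-B, and Newton-CG (truncated Newton) solvers as stand-ins for the codes tested in the paper. The synthetic ill-conditioned least-squares problem, the matrix construction, and the hand-coded gradient (which plays the role of the adjoint-computed gradient) are illustrative assumptions, not the authors' parabolized Navier-Stokes setup, and neither CG-descent nor the L-BFGS/T-N hybrid ships with SciPy.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 200

# Synthetic ill-conditioned least-squares problem: minimize 0.5 * ||A x - b||^2,
# with singular values of A spanning six decades to mimic an ill-posed inversion.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -6, n)
A = U @ np.diag(s) @ V.T
b = A @ np.ones(n) + 1e-8 * rng.standard_normal(n)  # noisy data

def cost(x):
    r = A @ x - b
    return 0.5 * (r @ r)

def grad(x):
    # Analytic gradient; in the paper's setting this would come from the adjoint model.
    return A.T @ (A @ x - b)

x0 = np.zeros(n)
for method in ["CG", "BFGS", "L-BFGS-B", "Newton-CG"]:
    kwargs = {"jac": grad}
    if method == "Newton-CG":
        # Truncated Newton needs only Hessian-vector products, here A^T (A p).
        kwargs["hessp"] = lambda x, p: A.T @ (A @ p)
    res = minimize(cost, x0, method=method, tol=1e-10, **kwargs)
    print(f"{method:10s} f={res.fun:.3e}  iterations={res.nit}  f-evals={res.nfev}")
```

On a problem this ill-conditioned, the relative rankings are sensitive to the stopping tolerance and line-search settings, which mirrors the paper's observation that CG-descent required an adequate choice of parameters and that the hybrid method's performance depended on how many L-BFGS and T-N iterations were interlaced.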