A secant method for nonlinear least-squares minimization

  • Authors:
  • Wei Xu; Thomas F. Coleman; Gang Liu

  • Affiliations:
  • Department of Mathematics, Tongji University, Shanghai 200092, China; Department of Combinatorics and Optimization, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada; Software School, Fudan University, Shanghai 200433, China

  • Venue:
  • Computational Optimization and Applications
  • Year:
  • 2012


Abstract

Quasi-Newton methods have played a prominent role, over many years, in the design of effective practical methods for the numerical solution of nonlinear minimization problems and in multi-dimensional zero-finding. There is a wide literature outlining the properties of these methods and illustrating their performance (e.g., Dennis and Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, 1996). In addition, most modern optimization libraries house a collection of quasi-Newton codes, and these are widely used. The quasi-Newton contribution to practical nonlinear optimization is unchallenged.

In this paper we propose and investigate an efficient quasi-Newton (secant) approach to the nonlinear least-squares problem, made practical by the selective application of automatic differentiation (AD) technology. We also observe that AD technology can increase the efficiency of the standard quasi-Newton (positive definite secant) approach when the problem is treated as a general nonlinear minimization, and we compare these two AD-assisted methods. Finally, we compare the AD-assisted approaches to a standard globalized Gauss-Newton method.
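
The paper's AD-assisted secant method is not spelled out in this abstract, so it is not reproduced here. As a point of reference, the sketch below shows the kind of globalized (damped) Gauss-Newton iteration the abstract uses as a baseline, applied to a hypothetical exponential curve-fitting problem; the model, data, and step-size rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical model for illustration only: fit y = a * exp(b * t) to data (t, y).
def residual(x, t, y):
    a, b = x
    return a * np.exp(b * t) - y

def jacobian(x, t):
    a, b = x
    e = np.exp(b * t)
    return np.column_stack((e, a * t * e))

def damped_gauss_newton(x0, t, y, max_iter=50, tol=1e-10):
    """Gauss-Newton with a simple backtracking line search as the globalization."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x, t, y)
        J = jacobian(x, t)
        # Gauss-Newton step: solve min_p ||J p + r||, i.e. J^T J p = -J^T r.
        p, *_ = np.linalg.lstsq(J, -r, rcond=None)
        # Backtracking: shrink the step until the sum of squares decreases.
        alpha, f0 = 1.0, 0.5 * r @ r
        while 0.5 * np.sum(residual(x + alpha * p, t, y) ** 2) > f0 and alpha > 1e-8:
            alpha *= 0.5
        x = x + alpha * p
        if np.linalg.norm(alpha * p) < tol:
            break
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 40)
    y = 2.0 * np.exp(1.3 * t) + 0.01 * rng.standard_normal(t.size)
    print(damped_gauss_newton([1.0, 1.0], t, y))  # roughly [2.0, 1.3]
```

In this baseline the Jacobian is coded by hand; the abstract's point is that a secant (quasi-Newton) approximation, combined with selective AD, can supply the needed derivative information more cheaply for the full nonlinear least-squares objective.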