This paper proposes a BFGS-SQP method for linearly constrained optimization in which the objective function $f$ is required only to have a Lipschitz gradient. The Karush--Kuhn--Tucker system of the problem is equivalent to a system of nonsmooth equations $F(v)=0$, and at each step the quasi-Newton matrix is updated only if $\|F(v_k)\|$ satisfies a prescribed update criterion. The method converges globally, and the rate of convergence is superlinear when $f$ is twice strongly differentiable at a solution of the optimization problem; no additional assumptions on the constraints are required. This generalizes the classical convergence theory of the BFGS method, which requires the objective function to be twice continuously differentiable. Applications to stochastic programs with recourse on a CM-5 parallel computer are discussed.
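To make the guarded update concrete, the following Python sketch applies a BFGS-SQP-style step to the equality-constrained case $\min f(x)$ subject to $Ax=b$, writing the KKT conditions as a system $F(x,\lambda)=0$ and performing the BFGS update of the matrix $B_k$ only when $\|F\|$ decreases by a given factor. This is a minimal illustration under assumed details: the function name `bfgs_sqp`, the parameter `eta`, the residual-decrease test, and the curvature safeguard are all hypothetical stand-ins, not the paper's actual update criterion.

```python
import numpy as np

def bfgs_sqp(grad_f, A, b, x0, lam0, tol=1e-8, max_iter=200, eta=0.9):
    """Sketch of a BFGS-SQP iteration for  min f(x)  s.t.  A x = b.

    The KKT conditions are written as the system
        F(x, lam) = [ grad_f(x) + A.T @ lam,  A @ x - b ] = 0,
    and the BFGS matrix B (approximating the Hessian of f) is updated
    only when ||F|| has decreased enough. The factor `eta` and this
    particular test are illustrative assumptions.
    """
    n, m = x0.size, b.size
    x, lam = x0.copy(), lam0.copy()
    B = np.eye(n)  # initial quasi-Newton matrix

    def F(x, lam):
        return np.concatenate([grad_f(x) + A.T @ lam, A @ x - b])

    Fv = F(x, lam)
    for _ in range(max_iter):
        if np.linalg.norm(Fv) <= tol:
            break
        # Solve the KKT linear system with B in place of the exact Hessian.
        K = np.block([[B, A.T], [A, np.zeros((m, m))]])
        d = np.linalg.solve(K, -Fv)
        s = d[:n]
        x_new, lam_new = x + d[:n], lam + d[n:]
        F_new = F(x_new, lam_new)

        # Guarded BFGS update: skip it unless the residual norm satisfies
        # the (assumed) decrease test and the curvature condition s'y > 0.
        if np.linalg.norm(F_new) <= eta * np.linalg.norm(Fv):
            y = grad_f(x_new) - grad_f(x)
            sy = s @ y
            if sy > 1e-12:
                Bs = B @ s
                B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy

        x, lam, Fv = x_new, lam_new, F_new
    return x, lam
```

Skipping the update when the residual test fails keeps $B_k$ well behaved even though the gradient is only Lipschitz, which is the intuition behind conditioning the update on $\|F(v_k)\|$ rather than updating at every step.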