Sequential Penalty Derivative-Free Methods for Nonlinear Constrained Optimization

  • Authors:
  • Giampaolo Liuzzi; Stefano Lucidi; Marco Sciandrone

  • Affiliations:
  • liuzzi@iasi.cnr.it; lucidi@dis.uniroma1.it; sciandro@dsi.unifi.it

  • Venue:
  • SIAM Journal on Optimization
  • Year:
  • 2010

Abstract

We consider the problem of minimizing a continuously differentiable function of several variables subject to smooth nonlinear constraints. We assume that the first-order derivatives of the objective function and of the constraints can be neither calculated nor explicitly approximated, so every minimization procedure must rely only on a suitable sampling of the problem functions. Such problems arise in many industrial and scientific applications, which motivates the growing interest in derivative-free methods for their solution. The aim of this paper is to extend a sequential penalty approach for nonlinear programming to the derivative-free setting. This approach consists of solving the original problem through a sequence of approximate minimizations of a merit function in which the penalization of constraint violation is progressively increased. In particular, under standard assumptions, we establish a general theoretical result characterizing how the sampling technique and the updating of the penalization must be coordinated in order to guarantee convergence to stationary points of the constrained problem. Building on this general result, we propose a new method and prove its convergence to stationary points of the constrained problem. The computational behavior of the method has been evaluated both on a set of test problems and on a real application. The results obtained, together with a comparison against other well-known derivative-free software, show the viability of the proposed sequential penalty approach.
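
To make the sequential penalty idea concrete, the sketch below illustrates the general scheme described in the abstract: an outer loop that progressively increases the penalization of constraint violation in a merit function, and a derivative-free inner loop that approximately minimizes that merit function by sampling only. This is a minimal illustration, not the authors' algorithm: the quadratic penalty term, the simple coordinate-search inner solver, and all parameter names (eps0, eps_shrink, step0, step_tol) are assumptions chosen for readability.

```python
import numpy as np

def sequential_penalty_df(f, g_list, x0, eps0=1.0, eps_shrink=0.1,
                          step0=1.0, step_tol=1e-6, max_outer=20):
    """Hypothetical sketch of a sequential penalty derivative-free loop.

    f      : objective, callable R^n -> R (no derivatives are used)
    g_list : inequality constraints g_i(x) <= 0, list of callables
    The merit function penalizes constraint violation with weight 1/eps;
    eps is shrunk between outer iterations (increasing the penalization)
    while the sampling stepsize of the inner search is tightened.
    """
    def merit(x, eps):
        viol = sum(max(0.0, gi(x)) ** 2 for gi in g_list)
        return f(x) + viol / eps

    def coordinate_search(x, eps, step):
        # Derivative-free inner solver: poll along +/- coordinate
        # directions and halve the stepsize when no poll point improves.
        fx = merit(x, eps)
        while step > step_tol:
            improved = False
            for i in range(len(x)):
                for s in (+step, -step):
                    y = x.copy()
                    y[i] += s
                    fy = merit(y, eps)
                    if fy < fx - 1e-12:
                        x, fx, improved = y, fy, True
            if not improved:
                step *= 0.5
        return x

    x, eps, step = np.asarray(x0, dtype=float), eps0, step0
    for _ in range(max_outer):
        x = coordinate_search(x, eps, step)   # approximate inner minimization
        eps *= eps_shrink                     # progressively increase penalization
        step = max(step * 0.5, step_tol)      # refine the sampling resolution
    return x

# Example use (assumed toy problem): minimize x1 + x2 s.t. x1^2 + x2^2 <= 2.
if __name__ == "__main__":
    sol = sequential_penalty_df(lambda x: x[0] + x[1],
                                [lambda x: x[0] ** 2 + x[1] ** 2 - 2.0],
                                x0=[0.0, 0.0])
    print(sol)  # expected near (-1, -1)
```

The key coupling emphasized in the paper is between the penalty update and the sampling: the inner sampling must be refined consistently with the growth of the penalization for the scheme to converge to stationary points of the constrained problem; the fixed shrink factors above are only placeholders for such a rule.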