Derivative-free optimization methods for finite minimax problems
Optimization Methods & Software
In this paper we present a derivative-free optimization algorithm for finite minimax problems. The algorithm calculates an approximate gradient for each of the active functions of the finite max function and uses these to generate an approximate subdifferential. The negative projection of 0 onto this set is used as a descent direction in an Armijo-like line search. We also present a robust version of the algorithm, which uses the 'almost active' functions of the finite max function in the calculation of the approximate subdifferential. Convergence results are presented for both algorithms, showing that either f(x^k) → −∞ or every cluster point is a Clarke stationary point. Theoretical and numerical results are presented for three specific approximate gradients: the simplex gradient, the centered simplex gradient, and the Gupal estimate of the gradient of the Steklov averaged function. A performance comparison is made between the regular and robust algorithms, the three approximate gradients, and a regular and robust stopping condition.
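The following is a minimal sketch, not the authors' implementation, of the main step the abstract describes: build approximate gradients of the almost-active component functions, project 0 onto their convex hull to obtain a descent direction, and backtrack with an Armijo-like test. It assumes Python with NumPy/SciPy, forward-difference simplex gradients over the coordinate directions, and SciPy's SLSQP solver for the projection; all names and parameters (simplex_gradient, descent_direction, minimax_step, eps, h, c) are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def simplex_gradient(f, x, h=1e-6):
    """Forward-difference simplex gradient of f at x over the simplex
    {x, x + h*e_1, ..., x + h*e_n} (one of the three gradients studied)."""
    n = len(x)
    f0 = f(x)
    return np.array([(f(x + h * np.eye(n)[i]) - f0) / h for i in range(n)])

def descent_direction(grads):
    """Return the negative of the projection of 0 onto conv{grads}, i.e.
    minimize ||sum_i w_i grad_i||^2 over the unit simplex of weights w."""
    G = np.vstack(grads)                       # one approximate gradient per row
    m = G.shape[0]
    obj = lambda w: (w @ G) @ (w @ G)          # squared norm of the combination
    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
    res = minimize(obj, np.full(m, 1.0 / m), bounds=[(0, None)] * m,
                   constraints=cons, method="SLSQP")
    return -(res.x @ G)

def minimax_step(fns, x, eps=1e-3, h=1e-6, c=1e-4):
    """One iteration for F(x) = max_i f_i(x): form the approximate
    subdifferential from the 'almost active' functions (robust variant),
    then run an Armijo-like backtracking line search along d."""
    F = lambda y: max(fi(y) for fi in fns)
    vals = np.array([fi(x) for fi in fns])
    active = [i for i, v in enumerate(vals) if v >= vals.max() - eps]
    d = descent_direction([simplex_gradient(fns[i], x, h) for i in active])
    t = 1.0
    while F(x + t * d) > F(x) - c * t * (d @ d) and t > 1e-12:
        t *= 0.5                               # Armijo-like sufficient decrease
    return x + t * d
```

Setting eps = 0 in this sketch keeps only the exactly active functions, mirroring the regular (non-robust) variant; swapping simplex_gradient for a centered-difference version would mirror the centered simplex gradient studied in the paper.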