Let $f$ be a continuous function on $\mathbb{R}^n$, and suppose $f$ is continuously differentiable on an open dense subset. Such functions arise in many applications, and very often minimizers are points at which $f$ is not differentiable. Of particular interest is the case where $f$ is not convex, and perhaps not even locally Lipschitz, but is a function whose gradient is easily computed where it is defined. We present a practical, robust algorithm to locally minimize such functions, based on gradient sampling. No subgradient information is required by the algorithm.

When $f$ is locally Lipschitz and has bounded level sets, and the sampling radius $\varepsilon$ is fixed, we show that, with probability 1, the algorithm generates a sequence with a cluster point that is Clarke $\varepsilon$-stationary. Furthermore, we show that if $f$ has a unique Clarke stationary point $\bar x$, then the set of all cluster points generated by the algorithm converges to $\bar x$ as $\varepsilon$ is reduced to zero.

Numerical results are presented demonstrating the robustness of the algorithm and its applicability in a wide variety of contexts, including cases where $f$ is not locally Lipschitz at minimizers. We report approximate local minimizers for functions in the applications literature which have not, to our knowledge, been obtained previously. When the termination criteria of the algorithm are satisfied, a precise statement about nearness to Clarke $\varepsilon$-stationarity is available. A MATLAB implementation of the algorithm is posted at http://www.cs.nyu.edu/overton/papers/gradsamp/alg.
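The convergence guarantees above are stated in terms of Clarke $\varepsilon$-stationarity. As a reminder, using the standard definition from the nonsmooth-optimization literature (the notation below is conventional and need not match the paper's own symbols), let $D$ denote the full-measure set on which $\nabla f$ exists:

```latex
% Clarke eps-subdifferential of a locally Lipschitz f, with D the set where
% \nabla f exists (a full-measure set, by Rademacher's theorem):
\bar\partial_\varepsilon f(x)
  = \operatorname{cl} \operatorname{conv} \nabla f\bigl( B(x,\varepsilon) \cap D \bigr),
\qquad
x \text{ is Clarke } \varepsilon\text{-stationary}
  \;\Longleftrightarrow\; 0 \in \bar\partial_\varepsilon f(x).
```

Gradient sampling approximates this set from the inside: the convex hull of finitely many gradients sampled in the $\varepsilon$-ball is a random inner approximation of $\bar\partial_\varepsilon f(x)$, which is what makes a quantitative statement about nearness to $\varepsilon$-stationarity possible at termination.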
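To make the method concrete, here is a minimal Python sketch of one fixed-radius gradient-sampling loop, assuming a user-supplied gradient oracle `grad` that is valid wherever $f$ is differentiable. The helper names (`_unit_ball`, `_min_norm_element`), the SLSQP-based solve of the small quadratic program, and the line-search constants are illustrative choices for exposition, not the authors' MATLAB implementation.

```python
import numpy as np
from scipy.optimize import minimize


def _unit_ball(rng, n, m):
    """Draw m points uniformly from the unit ball in R^n."""
    v = rng.standard_normal((m, n))
    v /= np.linalg.norm(v, axis=1, keepdims=True)   # uniform on the sphere
    r = rng.random(m) ** (1.0 / n)                  # radius correction for the ball
    return v * r[:, None]


def _min_norm_element(G):
    """Minimum-norm point of the convex hull of the columns of G:
    solve min ||G @ lam||^2 subject to lam >= 0, sum(lam) = 1."""
    m = G.shape[1]
    cons = ({"type": "eq", "fun": lambda lam: lam.sum() - 1.0},)
    res = minimize(lambda lam: 0.5 * np.dot(G @ lam, G @ lam),
                   np.full(m, 1.0 / m), method="SLSQP",
                   bounds=[(0.0, 1.0)] * m, constraints=cons)
    return G @ res.x


def gradient_sampling(f, grad, x, eps=0.1, max_iter=200, tol=1e-6,
                      beta=1e-4, rng=None):
    """Illustrative fixed-radius gradient-sampling loop."""
    rng = np.random.default_rng() if rng is None else rng
    n = x.size
    for _ in range(max_iter):
        # Gradients at x and at n+1 random points in the eps-ball; with
        # probability 1 the samples land where f is differentiable.
        pts = np.vstack([x, x + eps * _unit_ball(rng, n, n + 1)])
        G = np.column_stack([grad(p) for p in pts])
        g = _min_norm_element(G)       # approx. eps-steepest-descent generator
        gnorm = np.linalg.norm(g)
        if gnorm <= tol:               # approximate Clarke eps-stationarity
            break
        d = -g / gnorm
        # Simple backtracking (Armijo-style) line search on f.
        t = 1.0
        while f(x + t * d) > f(x) - beta * t * gnorm and t > 1e-12:
            t *= 0.5
        x = x + t * d
    return x
```

The essential step is the small quadratic program in `_min_norm_element`: the minimum-norm element of the convex hull of sampled gradients serves as an approximate $\varepsilon$-steepest-descent direction, and its norm at termination is what underwrites the quantitative statement about nearness to Clarke $\varepsilon$-stationarity mentioned in the abstract.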