This paper studies partition-based deterministic algorithms for the global optimization of Lipschitz-continuous functions that do not require knowledge of the Lipschitz constant. We first introduce a general scheme for a partition-based algorithm and then focus on selection strategies that exploit information about the objective function. We propose two such strategies. The first assumes that the global optimum value of the objective function is known a priori; in this case the selection strategy guarantees that every infinite sequence of trial points converges to global minimum points. The second requires no a priori knowledge of the objective function and instead exploits information gathered as the algorithm progresses; in this case we can guarantee, from a theoretical point of view, the so-called everywhere-dense convergence of the algorithm.