Stochastic global optimization methods. Part I: Clustering methods
Mathematical Programming: Series A and B
Stochastic global optimization methods. Part II: Multi level methods
Mathematical Programming: Series A and B
Bayesian stopping rules for multistart global optimization methods
Mathematical Programming: Series A and B
Practical methods of optimization (2nd ed.)
A filled function method for finding a global minimizer of a function of several variables
Mathematical Programming: Series A and B
The globally convexized filled functions for global optimization
Applied Mathematics and Computation
Terminal Repeller Unconstrained Subenergy Tunneling (TRUST) for fast global optimization
Journal of Optimization Theory and Applications
Recent developments and trends in global optimization
Journal of Computational and Applied Mathematics, Special Issue on Numerical Analysis 2000, Vol. IV: Optimization and Nonlinear Equations
A Sequential Convexification Method (SCM) for Continuous Global Optimization
Journal of Global Optimization
Isotropic Effective Energy Simulated Annealing Searches for Low Energy Molecular Cluster States
A new filled function applied to global optimization
Computers and Operations Research
Numerical Methods for Unconstrained Optimization and Nonlinear Equations (Classics in Applied Mathematics, 16)
On the computation of all global minimizers through particle swarm optimization
IEEE Transactions on Evolutionary Computation
Hybridization of gradient descent algorithms with dynamic tunneling methods for global optimization
IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans
A new auxiliary function method is proposed, based on a two-stage deterministic search for global optimization. First, a local minimum of the original function is obtained. A stretching technique then modifies the objective function with respect to this local minimum: the transformed function stretches the function values higher than the obtained minimum upward while leaving the lower values unchanged. Next, an auxiliary function is constructed on the stretched function; it descends everywhere in the region where the function values exceed the obtained minimum and has a stationary point in the lower region. The auxiliary function is minimized, and the stationary point found serves as the starting point for a new round of local search. This procedure is repeated until a termination criterion is met. A theoretical analysis is also provided. The main feature of the new method is that it significantly relaxes the requirements on its parameters. Numerical experiments on benchmark functions of various dimensions (up to 50) demonstrate that the new algorithm converges faster, has a higher success rate, and finds higher-quality solutions than several existing algorithms of the same type, consistent with the theoretical analysis.