Analysis and development of stopping criteria for stochastic global optimization algorithms
Two common questions arise when using a stochastic global optimization algorithm such as simulated annealing: when should a single run of the algorithm be stopped, and should the algorithm then be restarted with a new run or terminated entirely? In this paper, we develop a stopping and restarting strategy that considers the tradeoff between computational effort and the probability of obtaining the global optimum. The analysis is based on a stochastic process called Hesitant Adaptive Search with Power-Law Improvement Distribution (HASPLID). HASPLID models the behavior of stochastic optimization algorithms and motivates an implementable framework, Dynamic Multistart Sequential Search (DMSS). We demonstrate the practicality of DMSS by using it to govern a simple local search heuristic on three test problems from the global optimization literature.
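The multistart idea underlying this work can be illustrated with a minimal sketch: repeatedly run a local search from random starting points and apply a stopping rule that weighs further effort against the chance of further improvement. The sketch below is hypothetical and much simpler than the paper's DMSS framework; it uses a basic rule (stop after a fixed number of consecutive non-improving restarts) purely to show the stop/restart structure, and the test function and all parameter names are illustrative assumptions.

```python
import random

def local_search(f, x, step=0.1, iters=200):
    """Greedy hill climbing from x: accept only improving random moves."""
    fx = f(x)
    for _ in range(iters):
        y = x + random.uniform(-step, step)
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
    return x, fx

def multistart(f, lo, hi, max_nonimproving=10, seed=0):
    """Restart local_search from uniform random points in [lo, hi].
    Stopping rule (a deliberately simple stand-in, not the paper's DMSS):
    terminate after max_nonimproving consecutive restarts fail to improve
    the best objective value found so far."""
    random.seed(seed)
    best_x, best_f = None, float("inf")
    stall = 0
    while stall < max_nonimproving:
        x0 = random.uniform(lo, hi)
        x, fx = local_search(f, x0)
        if fx < best_f - 1e-12:
            best_x, best_f = x, fx
            stall = 0  # improvement: reset the non-improvement counter
        else:
            stall += 1
    return best_x, best_f

# Illustrative test objective: (x^2 - 1)^2 has two global minima,
# at x = -1 and x = +1, both with objective value 0.
f = lambda x: (x * x - 1.0) ** 2
```

A more refined rule in the same skeleton would replace the fixed `max_nonimproving` threshold with an estimate of the probability that another restart improves on the incumbent, which is the kind of tradeoff the paper analyzes.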