Efficient multi-start strategies for local search algorithms
Journal of Artificial Intelligence Research
Local search algorithms for global optimization often get trapped in a local optimum. A common remedy is to restart the algorithm when no progress is observed. Alternatively, one can run multiple instances of a local search algorithm and allocate computational resources (in particular, processing time) to the instances according to their behavior. A multi-start strategy must therefore decide dynamically when to allocate additional resources to a particular instance and when to start new instances. In this paper we propose a consistent multi-start strategy that assumes a convergence rate of the local search algorithm up to an unknown constant and, in every phase, gives preference to those instances that could converge to the best value for a particular range of the constant. Combined with the local search algorithm SPSA (Simultaneous Perturbation Stochastic Approximation), the strategy performs remarkably well in practice, both on synthetic tasks and on tuning the parameters of learning algorithms.