Vector quantization and signal compression
Optimal speedup of Las Vegas algorithms
Information Processing Letters
Lipschitzian optimization without the Lipschitz constant
Journal of Optimization Theory and Applications
Towards a characterisation of the behaviour of stochastic local search algorithms for SAT
Artificial Intelligence
Finite-time Analysis of the Multiarmed Bandit Problem
Machine Learning
A perspective view and survey of meta-learning
Artificial Intelligence Review
Eighteenth national conference on Artificial intelligence
Experimental Research in Evolutionary Computation: The New Experimentalism (Natural Computing Series)
Data Mining: Practical Machine Learning Tools and Techniques, Second Edition (Morgan Kaufmann Series in Data Management Systems)
Learning dynamic algorithm portfolios
Annals of Mathematics and Artificial Intelligence
k-means++: the advantages of careful seeding
SODA '07 Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms
Reactive Search and Intelligent Optimization
Low-knowledge algorithm control
AAAI'04 Proceedings of the 19th National Conference on Artificial Intelligence
An asymptotically optimal algorithm for the max k-armed bandit problem
AAAI'06 Proceedings of the 21st national conference on Artificial intelligence - Volume 1
On the Use of Run Time Distributions to Evaluate and Compare Stochastic Local Search Algorithms
SLS '09 Proceedings of the Second International Workshop on Engineering Stochastic Local Search Algorithms. Designing, Implementing and Analyzing Effective Heuristics
Efficient Multi-start Strategies for Local Search Algorithms
ECML PKDD '09 Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases: Part I
The max K-armed bandit: a new model of exploration applied to search heuristic selection
AAAI'05 Proceedings of the 20th national conference on Artificial intelligence - Volume 3
IJCAI'07 Proceedings of the 20th International Joint Conference on Artificial Intelligence
Stopping and restarting strategy for stochastic sequential search in global optimization
Journal of Global Optimization
Optimal algorithms for global optimization in case of unknown Lipschitz constant
Journal of Complexity - Special Issue: Algorithms and Complexity for Continuous Problems, Schloss Dagstuhl, Germany, September 2004
ParamILS: an automatic algorithm configuration framework
Journal of Artificial Intelligence Research
Algorithm selection as a bandit problem with unbounded losses
LION'10 Proceedings of the 4th international conference on Learning and intelligent optimization
A simple distribution-free approach to the max k-armed bandit problem
CP'06 Proceedings of the 12th international conference on Principles and Practice of Constraint Programming
Local search algorithms applied to optimization problems often get trapped in local optima. The common remedy is to restart the algorithm when no progress is observed. Alternatively, one can start multiple instances of a local search algorithm and allocate computational resources (in particular, processing time) to the instances depending on their behavior. Hence, a multi-start strategy has to decide (dynamically) when to allocate additional resources to a particular instance and when to start new instances. In this paper we propose multi-start strategies motivated by work on multi-armed bandit problems and on Lipschitz optimization with an unknown constant. The strategies continuously estimate the potential performance of each algorithm instance by assuming that the local search algorithm converges at a known rate up to an unknown constant, and in every phase they allocate resources to those instances that could converge to the optimum for a particular range of the constant. Asymptotic bounds are given on the performance of the strategies. In particular, we prove that at most a quadratic increase in the number of evaluations of the target function is needed to match the performance of a local search algorithm started from the attraction region of the optimum. Experiments are provided using SPSA (Simultaneous Perturbation Stochastic Approximation) and k-means as local search algorithms, and the results indicate that the proposed strategies work well in practice and, in all cases studied, need only logarithmically more evaluations of the target function, rather than the quadratic increase suggested by the theory.
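The abstract only sketches the allocation rule, so the following is a minimal, hedged illustration of the general idea in Python. The helper names (make_instance, step, value), the optimistic improvement model value - c / n, and the small grid of candidate constants are assumptions introduced for this sketch; the paper's actual strategies select instances over all possible values of the unknown constant and come with the guarantees stated above.

```python
import math

def multistart_search(make_instance, step, value, rounds=100,
                      constants=(0.01, 0.1, 1.0, 10.0)):
    """Bandit-style multi-start controller (illustrative sketch only).

    make_instance() -> fresh local-search state
    step(state)     -> state after one more batch of function evaluations
    value(state)    -> best target value found by that instance so far (minimization)

    Assumes an instance that has used n evaluations might still improve by
    roughly c / n for some unknown constant c, so its optimistic score is
    value - c / n.  A small grid of candidate constants stands in for the
    paper's selection over the whole range of the constant.
    """
    instances = []  # each element: [search_state, evaluations_used]
    best = math.inf
    for _ in range(rounds):
        # Start one fresh instance per round so no single local optimum
        # can capture the whole budget.
        instances.append([make_instance(), 1])
        chosen = set()
        for c in constants:
            # For this candidate constant, pick the instance that could
            # still reach the lowest value under the optimistic model.
            i = min(range(len(instances)),
                    key=lambda j, c=c: value(instances[j][0]) - c / instances[j][1])
            chosen.add(i)
        for i in chosen:
            instances[i][0] = step(instances[i][0])
            instances[i][1] += 1
        best = min(best, min(value(s) for s, _ in instances))
    return best
```

With a fixed grid of constants the rule amounts to running a few "optimism levels" in parallel; smaller constants favor instances that already have good values, larger constants favor young instances that have had few evaluations. The strategies analyzed in the paper avoid fixing such a grid and instead cover every value of the unknown constant at once.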