Using Experimental Design to Find Effective Parameter Settings for Heuristics. Journal of Heuristics.
A Racing Algorithm for Configuring Metaheuristics. GECCO '02: Proceedings of the Genetic and Evolutionary Computation Conference.
Scaling and Probabilistic Smoothing: Efficient Dynamic Local Search for SAT. CP '02: Proceedings of the 8th International Conference on Principles and Practice of Constraint Programming.
Global Optimization of Stochastic Black-Box Systems via Sequential Kriging Meta-Models. Journal of Global Optimization.
Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning).
Experimental Research in Evolutionary Computation: The New Experimentalism (Natural Computing Series).
Finding Optimal Algorithmic Parameters Using Derivative-Free Optimization. SIAM Journal on Optimization.
Fine-Tuning of Algorithms Using Fractional Experimental Designs and Local Search. Operations Research.
An Experimental Investigation of Model-Based Parameter Optimisation: SPO and Beyond. Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation.
Automatic Algorithm Configuration Based on Local Search. AAAI'07: Proceedings of the 22nd National Conference on Artificial Intelligence, Volume 2.
ParamILS: An Automatic Algorithm Configuration Framework. Journal of Artificial Intelligence Research.
Improvement Strategies for the F-Race Algorithm: Sampling Design and Iterative Refinement. HM'07: Proceedings of the 4th International Conference on Hybrid Metaheuristics.
A Gender-Based Genetic Algorithm for the Automatic Configuration of Algorithms. CP'09: Proceedings of the 15th International Conference on Principles and Practice of Constraint Programming.
UBCSAT: An Implementation and Experimentation Environment for SLS Algorithms for SAT and MAX-SAT. SAT'04: Proceedings of the 7th International Conference on Theory and Applications of Satisfiability Testing.
Automatic and Interactive Tuning of Algorithms. Proceedings of the 13th Annual Conference Companion on Genetic and Evolutionary Computation.
Tradeoffs in the Empirical Evaluation of Competing Algorithm Designs. Annals of Mathematics and Artificial Intelligence.
Automated Configuration of Mixed Integer Programming Solvers. CPAIOR'10: Proceedings of the 7th International Conference on Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems.
Fine-Tuning Algorithm Parameters Using the Design of Experiments Approach. LION'05: Proceedings of the 5th International Conference on Learning and Intelligent Optimization.
Sequential Model-Based Optimization for General Algorithm Configuration. LION'05: Proceedings of the 5th International Conference on Learning and Intelligent Optimization.
An Evaluation of Sequential Model-Based Optimization for Expensive Blackbox Functions. Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation.
Algorithm Runtime Prediction: Methods & Evaluation. Artificial Intelligence.
A Beginner's Guide to Tuning Methods. Applied Soft Computing.
The optimization of algorithm performance by automatically identifying good parameter settings is an important problem that has recently attracted much attention in the discrete optimization community. One promising approach constructs predictive performance models and uses them to focus attention on promising regions of a design space. Such methods have become quite sophisticated and have achieved significant successes on other problems, particularly in experimental design applications. However, they have typically been designed to achieve good performance only under a budget expressed as a number of function evaluations (e.g., target algorithm runs). In this work, we show how to extend the Sequential Parameter Optimization framework [SPO; see 5] to operate effectively under time bounds. Our methods take into account both the varying amount of time required for different algorithm runs and the complexity of model building and evaluation; they are particularly useful for minimizing target algorithm runtime. Specifically, we avoid the up-front cost of an initial design, introduce a time-bounded intensification mechanism, and show how to reduce the overhead incurred by constructing and using models. Overall, we show that our method represents a new state of the art in model-based optimization of algorithms with continuous parameters on single problem instances.
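The core idea can be illustrated with a minimal sketch of a time-bounded sequential model-based optimization loop. This is not the authors' SPO implementation: the function names, the single continuous parameter, and the "surrogate" (which simply prefers candidates near the current incumbent, where a real system would fit a Gaussian-process model) are all illustrative assumptions. What it does show is the two mechanisms the abstract highlights: skipping a costly up-front initial design, and checking the remaining wall-clock budget, which also absorbs model overhead, before every target-algorithm run.

```python
import random
import time

def time_bounded_smbo(run_target, lower, upper, time_budget, n_candidates=10, seed=0):
    """Toy time-bounded SMBO loop (illustrative sketch, not the SPO algorithm).

    Stops when the wall-clock budget is exhausted, rather than after a fixed
    number of target-algorithm runs; all overhead (candidate generation,
    'model' evaluation) counts against the same budget.
    """
    rng = random.Random(seed)
    start = time.perf_counter()
    # No up-front initial design: begin from a single random configuration.
    incumbent = rng.uniform(lower, upper)
    incumbent_cost = run_target(incumbent)
    history = [(incumbent, incumbent_cost)]
    while time.perf_counter() - start < time_budget:
        # Surrogate stand-in: rank random candidates by distance to the
        # incumbent (a real implementation would fit a predictive model here).
        candidates = [rng.uniform(lower, upper) for _ in range(n_candidates)]
        challenger = min(candidates, key=lambda x: abs(x - incumbent))
        # Time-bounded intensification: only launch another target run if
        # budget remains after the model overhead above.
        if time.perf_counter() - start >= time_budget:
            break
        cost = run_target(challenger)
        history.append((challenger, cost))
        if cost < incumbent_cost:
            incumbent, incumbent_cost = challenger, cost
    return incumbent, incumbent_cost, history
```

For example, minimizing a cheap stand-in objective such as `lambda x: (x - 2.0) ** 2` on `[0, 5]` with a 50 ms budget returns an incumbent whose cost is never worse than the first random configuration's, regardless of how many runs the budget allowed.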