The chief purpose of research in optimisation is to understand how to design (or choose) the most suitable algorithm for a given distribution of problem instances. Ideally, when an algorithm is developed for specific problems, the boundaries of its performance should be clear, and we would expect reasonably good performance both within and (at least modestly) outside its 'seen' instance distribution. However, we show that these ideals are highly over-optimistic, and suggest that standard algorithm-choice scenarios will rarely lead to the best algorithm for individual instances in the space of interest. We do this by examining algorithm 'footprints', which indicate how performance generalises across instance space. We find much evidence that the typical way of choosing the 'best' algorithm, via tests over a distribution of instances, is seriously flawed. Moreover, understanding how footprints vary between algorithms and across instance space dimensions may provide a future platform for wiser algorithm-choice decisions.
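The footprint idea above can be illustrated with a small sketch. Everything here is hypothetical and not from the paper: two toy performance functions (`perf_a`, `perf_b`) stand in for real algorithms, instance space is a 2D grid of feature values, and a footprint is simply the set of points where an algorithm meets a performance threshold. The point of the sketch is the set difference at the end: instances covered by only one footprint are exactly those a single "best on average" choice would serve badly.

```python
# Hypothetical illustration of algorithm "footprints" in instance space.
# perf_a / perf_b are toy performance models, NOT algorithms from the paper;
# the 2D grid stands in for an instance-feature space.

import itertools


def perf_a(x, y):
    # Toy model: algorithm A performs well when feature x is small.
    return 1.0 - x


def perf_b(x, y):
    # Toy model: algorithm B performs well when feature y is large.
    return y


def footprint(perf, grid, threshold=0.5):
    """Set of instance-feature points where `perf` meets the threshold."""
    return {(x, y) for x, y in grid if perf(x, y) >= threshold}


# A 5x5 grid over the unit square of instance features.
grid = [(x / 4, y / 4) for x, y in itertools.product(range(5), range(5))]

fa = footprint(perf_a, grid)
fb = footprint(perf_b, grid)

# Instances where only one algorithm is adequate: picking a single
# overall winner necessarily sacrifices one of these regions.
only_a = fa - fb
only_b = fb - fa
```

Under these toy models, each footprint covers a different band of the grid, and `only_a`/`only_b` are non-empty, so no single algorithm dominates the whole instance space.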