This paper analyses extensions of No-Free-Lunch (NFL) theorems to countably infinite and uncountably infinite domains, and investigates the design of optimal optimization algorithms. The original NFL theorem, due to Wolpert and Macready, states that for finite search domains all search heuristics have the same performance when averaged over the uniform distribution over all possible functions. For infinite domains, extending the concept of a distribution over all possible functions involves measurability issues and stochastic process theory. For countably infinite domains, we prove that the natural extension of NFL theorems, under the current formalization of probability, does not hold, but that a weaker form of NFL does hold, establishing the existence of non-trivial distributions of fitness functions under which all search heuristics perform equally. Our main result is that for continuous domains, NFL does not hold. This free-lunch theorem is based on formalizing the concept of random fitness functions by means of random fields. We also consider the design of optimal optimization algorithms for a given random field in a black-box setting, where complexity is measured solely by the number of requests to the fitness function. We derive an optimal algorithm based on Bellman's decomposition principle, for a given number of iterates and a given distribution of fitness functions. We also approximate this algorithm using a Monte-Carlo planning algorithm close to UCT (Upper Confidence Trees), and provide experimental results.
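The finite-domain NFL statement above can be checked by exhaustive enumeration on a toy instance. The sketch below (illustrative only; the domain, codomain, and performance measure are our own choices, not taken from the paper) averages the best-fitness-so-far of two deterministic non-repeating search heuristics over all functions from a three-point domain to {0, 1}, and confirms the averages coincide at every budget:

```python
from itertools import product

X = [0, 1, 2]   # toy finite search domain
Y = [0, 1]      # toy finite codomain

def best_so_far(f, order, k):
    # Best fitness value observed after k distinct evaluations,
    # visiting points in the fixed order given.
    return max(f[x] for x in order[:k])

def avg_performance(order, k):
    # Average best-so-far over ALL |Y|^|X| functions f : X -> Y,
    # i.e. the uniform distribution over possible fitness functions.
    funcs = [dict(zip(X, vals)) for vals in product(Y, repeat=len(X))]
    return sum(best_so_far(f, order, k) for f in funcs) / len(funcs)

# Two non-repeating heuristics with different visit orders have
# identical average performance for every evaluation budget k.
for k in (1, 2, 3):
    assert avg_performance([0, 1, 2], k) == avg_performance([2, 0, 1], k)
```

Any performance measure that depends only on the observed fitness values yields the same conclusion on finite domains; the paper's contribution is precisely that this averaging argument breaks down on continuous domains.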