Theoretical Computer Science - Natural computing
Multiobjective evolutionary algorithms: classifications, analyses, and new innovations
The classic NFL theorems are invariably cast in terms of single-objective optimization problems. We confirm that the classic NFL theorem holds for general multiobjective fitness spaces, and show how this follows from a 'single-objective' NFL theorem. We also show that, given any particular Pareto front, an NFL theorem holds for the set of all multiobjective problems which have that Pareto front. It follows that, given any 'shape' or class of Pareto fronts, an NFL theorem holds for the set of all multiobjective problems in that class. These findings are salient to test-function design. Such NFL results are cast in the typical context of absolute performance, assuming a performance metric which returns a value based on the result produced by a single algorithm. In multiobjective search, however, we commonly use comparative metrics, which return performance measures based non-trivially on the results from two (or more) algorithms. Closely related to, but extending, the observations in the seminal NFL work concerning minimax distinctions between algorithms, we provide a 'Free Leftovers' theorem for comparative performance of algorithms over permutation functions. In words: over the space of permutation problems, every algorithm has some companion algorithm(s) which it outperforms, according to a certain well-behaved metric, when comparative performance is summed over all problems in the space.
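The absolute-performance flavour of NFL can be demonstrated exhaustively on a toy instance. The following sketch is illustrative only (the tiny search space, the fixed-order "algorithms", and the steps-to-optimum metric are assumptions, not constructions from the paper): over the set of *all* fitness functions on a small domain, every deterministic fixed-order search achieves the same performance summed across problems.

```python
from itertools import product, permutations

X = (0, 1, 2)                 # toy search space (assumed for illustration)
Y = (0, 1)                    # possible fitness values
# All 2**3 = 8 fitness functions f: X -> Y, each stored as a value table.
problems = list(product(Y, repeat=len(X)))

def steps_to_best(order, f):
    """Number of evaluations a fixed-order (non-adaptive) search
    needs before it first samples a point achieving max(f)."""
    best = max(f)
    for t, x in enumerate(order, start=1):
        if f[x] == best:
            return t
    return len(order)

# Sum the metric over ALL problems, for every possible search order.
totals = {order: sum(steps_to_best(order, f) for f in problems)
          for order in permutations(X)}

# Every order attains the same total: no free lunch over the full set.
assert len(set(totals.values())) == 1
```

Because the set of all functions is closed under permutation of the search space, the multiset of evaluation traces seen by any fixed order is identical, so the totals must tie; by contrast, the Free Leftovers result in the abstract concerns *comparative* metrics, which escape this symmetry argument.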