Randomised population-based algorithms, such as evolutionary, genetic and swarm-based algorithms, and their hybrids with traditional search techniques, have proven successful and robust on many difficult real-valued optimisation problems. This success, along with the ready applicability of these techniques, has led to an explosion in the number of algorithms and variants proposed. For the field to advance, it is necessary to carry out effective comparative evaluations of these algorithms, and thereby better identify and understand the properties that lead to better performance. This paper discusses the difficulties of providing benchmarking of evolutionary and allied algorithms that is both meaningful and logistically viable. To be meaningful, the benchmarking must give a fair comparison, free as far as possible from biases that favour one style of algorithm over another. To be logistically viable, it must overcome the need for pairwise comparisons between all proposed algorithms. To address the first problem, we begin by attempting to identify the biases inherent in commonly used benchmark functions. We then describe a suite of test problems, generated recursively as self-similar or fractal landscapes, designed to overcome these biases. For the second, we describe a server that uses web services to allow researchers to 'plug in' their algorithms, running on their local machines, to a central benchmarking repository.
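
To illustrate the kind of recursively generated, self-similar landscape the abstract describes, the Python sketch below builds a one-dimensional fractal test function by midpoint displacement. This is a minimal illustration under assumed choices, not the paper's actual generator: the function names (fractal_landscape_1d, objective), the roughness parameter, and the linear-interpolation evaluation are all assumptions made for brevity. The key self-similar property is that the random displacements shrink by a constant factor at each level of recursion, so structure repeats at every scale.

import random

def fractal_landscape_1d(depth=10, roughness=0.6, seed=0):
    # Build a 1D self-similar landscape by midpoint displacement.
    # Returns heights sampled on a uniform grid of 2**depth + 1
    # points over [0, 1]. 'roughness' (an illustrative parameter,
    # not from the paper) controls how fast displacements shrink
    # per recursion level; values near 1 give more rugged terrain.
    rng = random.Random(seed)
    heights = [rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)]
    amplitude = 1.0
    for _ in range(depth):
        refined = []
        for left, right in zip(heights, heights[1:]):
            midpoint = 0.5 * (left + right) + rng.uniform(-amplitude, amplitude)
            refined.extend([left, midpoint])
        refined.append(heights[-1])
        heights = refined
        amplitude *= roughness  # displacements shrink self-similarly
    return heights

def objective(x, heights):
    # Evaluate the landscape at x in [0, 1] by linear interpolation,
    # turning the sampled heights into a real-valued test function.
    position = x * (len(heights) - 1)
    index = min(int(position), len(heights) - 2)
    fraction = position - index
    return (1 - fraction) * heights[index] + fraction * heights[index + 1]

# Example: minimise one randomised landscape by uniform random search.
heights = fractal_landscape_1d(depth=12, roughness=0.55, seed=42)
rng = random.Random(1)
best = min(objective(rng.random(), heights) for _ in range(10_000))
print(f"best value found: {best:.4f}")

Because the generator is seeded, each seed yields a different but statistically similar landscape, which hints at how a randomised suite of such problems can reduce the risk of algorithms being tuned to any one fixed benchmark function.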