A method for parameter calibration and relevance estimation in evolutionary algorithms
Proceedings of the 8th annual conference on Genetic and evolutionary computation
A general framework for statistical performance comparison of evolutionary computation algorithms
Information Sciences: an International Journal
Engineering Applications of Artificial Intelligence
Relevance estimation and value calibration of evolutionary algorithm parameters
IJCAI'07 Proceedings of the 20th international joint conference on Artificial intelligence
Reinforcement learning for online control of evolutionary algorithms
ESOA'06 Proceedings of the 4th international conference on Engineering self-organising systems
Parameter setting for evolutionary latent class clustering
ISICA'07 Proceedings of the 2nd international conference on Advances in computation and intelligence
Time-dependent performance comparison of evolutionary algorithms
ICANNGA'09 Proceedings of the 9th international conference on Adaptive and natural computing algorithms
Computers and Industrial Engineering
INPUT: the intelligent parameter utilization tool
Proceedings of the 14th annual conference companion on Genetic and evolutionary computation
A meta-learning prediction model of algorithm performance for continuous optimization problems
PPSN'12 Proceedings of the 12th international conference on Parallel Problem Solving from Nature - Volume Part I
Is the meta-EA a viable optimization method?
Proceedings of the 15th annual conference on Genetic and evolutionary computation
This paper describes a statistical method for finding good parameter settings for evolutionary algorithms. The method builds a functional relationship between the algorithm's performance and its parameter values; this relationship, a statistical model, is identified from simulation data. Estimation and test procedures are then used to evaluate the effect of parameter variation, and good parameter settings can be investigated with a reduced number of experiments. Problem labeling can also be treated as a model variable, so the method can identify classes of problems on which the algorithm behaves similarly. Defining such classes improves the quality of the estimates without increasing the computational cost.
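The abstract's idea of fitting a statistical model of performance as a function of parameter values, then testing each parameter's effect, can be sketched as follows. This is a minimal illustration, not the paper's actual procedure: the parameter names (mutation and crossover rate), the quadratic model form, and the simulated performance data are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulation data: performance of an EA measured at sampled
# settings of two parameters. In the assumed ground truth, performance
# depends on the mutation rate but not on the crossover rate.
n = 200
mutation = rng.uniform(0.0, 0.3, n)
crossover = rng.uniform(0.5, 1.0, n)
performance = 1.0 - 5.0 * (mutation - 0.1) ** 2 + rng.normal(0.0, 0.05, n)

# Statistical model of performance: quadratic in mutation, linear in crossover.
X = np.column_stack([np.ones(n), mutation, mutation**2, crossover])
beta, _, _, _ = np.linalg.lstsq(X, performance, rcond=None)

# Test procedure: t-statistic for each coefficient (beta / standard error).
# A small |t| suggests the corresponding parameter has little effect.
resid = performance - X @ beta
dof = n - X.shape[1]
sigma2 = resid @ resid / dof
cov = sigma2 * np.linalg.inv(X.T @ X)
t = beta / np.sqrt(np.diag(cov))

# A good setting can be read off the fitted model: the quadratic in the
# mutation rate peaks at -b1 / (2 * b2).
best_mutation = -beta[1] / (2 * beta[2])
```

Because the fitted model interpolates between sampled settings, a promising region of the parameter space can be located from far fewer runs than a grid search over raw measurements would require, which mirrors the abstract's claim about a reduced number of experiments.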