This paper introduces a new metric vector for assessing the performance of different multi-objective algorithms relative to the range of performance expected from a random search. The metric requires an ensemble of repeated trials to be performed, reducing the chance of overly favourable results. The random-search baseline for the function under test may be either analytic or created by a Monte-Carlo process; the metric is therefore repeatable and accurate. It allows both the median and the worst performance of different algorithms to be compared directly, and scales well to high-dimensional many-objective problems. The metric quantifies, and is sensitive to, the distance of the solutions from the Pareto set, the distribution of points across the set, and the repeatability of the trials. Both the Monte-Carlo and closed-form analysis methods provide accurate analytic confidence intervals on the observed results.