A major question still faces the multi-objective optimization community: how can the performance of multi-objective stochastic optimizers be compared effectively? Existing metrics suffer from various drawbacks in addressing this question. In this article, three convergence-based M-ary cardinal metrics, each built on a different form of dominance relation between two solutions, are proposed for comparing the performance of two optimizers over their multiple runs. The metrics are first tested on benchmark instances whose relative performance is already known, and their outcomes on further instances are then compared with those of three existing metrics.
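The article's own M-ary cardinal metrics are not reproduced in this abstract, but the following minimal Python sketch illustrates the kind of dominance-based, cardinality-style comparison it refers to, using the well-known two-set coverage measure C(A, B) of Zitzler and Thiele as a stand-in. The function names, the minimization convention, and the representation of approximation sets as lists of objective vectors are illustrative assumptions, not definitions taken from the article.

```python
from typing import Sequence

Objectives = Sequence[float]


def dominates(a: Objectives, b: Objectives) -> bool:
    """Return True if a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


def coverage(set_a: Sequence[Objectives], set_b: Sequence[Objectives]) -> float:
    """Two-set coverage C(A, B): fraction of points in set_b that are weakly
    dominated by at least one point in set_a. Not symmetric, so both
    C(A, B) and C(B, A) are usually reported."""
    if not set_b:
        return 0.0
    covered = sum(
        1
        for b in set_b
        if any(dominates(a, b) or tuple(a) == tuple(b) for a in set_a)
    )
    return covered / len(set_b)


# Hypothetical example: two approximation sets from two optimizer runs,
# each point being a vector of two objective values to be minimized.
A = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
B = [(1.5, 4.5), (3.0, 3.0), (5.0, 0.5)]
print(coverage(A, B), coverage(B, A))
```

In practice, such a pairwise comparison would be repeated across the multiple runs of each optimizer and the resulting values aggregated statistically; the article's proposed metrics refine this idea by using different dominance relations between pairs of solutions.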