In this paper, two approaches for estimating the generation in which a multi-objective evolutionary algorithm (MOEA) shows statistically significant signs of convergence are introduced. A set-based perspective is taken in which convergence is measured by performance indicators. The proposed techniques satisfy the requirements of proper statistical assessment on the one hand and of efficient optimisation for real-world problems on the other. The first approach accounts for the stochastic nature of the MOEA by repeating the optimisation runs for increasing generation numbers and analysing the performance indicators with statistical tools. This technique results in a very robust offline procedure. In addition, an online convergence detection method is introduced. It automatically stops the MOEA when either the variance of the performance indicators falls below a specified threshold or a stagnation of their overall trend is detected. Both methods are analysed and compared for two MOEAs on different classes of benchmark functions. It is shown that the methods operate successfully on all stated problems, requiring fewer function evaluations while preserving good approximation quality.
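The online stopping rule described above can be illustrated with a minimal sketch. The function below is a hypothetical implementation (the function name, window size, and thresholds are illustrative assumptions, not the authors' actual procedure): it halts the run when the recent variance of a performance-indicator history drops below a threshold, or when the overall trend stagnates, here approximated by a near-zero least-squares slope over a sliding window.

```python
import statistics

def should_stop(indicator_history, window=10, var_threshold=1e-4, slope_threshold=1e-5):
    """Hypothetical online stopping rule for a MOEA run.

    Stops when, over the last `window` generations, either
    (1) the variance of the performance indicator falls below
        `var_threshold`, or
    (2) the overall trend stagnates, i.e. the least-squares slope
        of the indicator values is close to zero.
    """
    if len(indicator_history) < window:
        return False  # not enough data to judge convergence yet

    recent = indicator_history[-window:]

    # Criterion 1: variance of the indicator falls below the threshold.
    if statistics.pvariance(recent) < var_threshold:
        return True

    # Criterion 2: stagnation of the overall trend (flat regression slope).
    n = len(recent)
    mean_x = (n - 1) / 2
    mean_y = statistics.fmean(recent)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(recent)) \
        / sum((x - mean_x) ** 2 for x in range(n))
    return abs(slope) < slope_threshold
```

In a MOEA main loop, `should_stop` would be called once per generation with the accumulated indicator values (e.g. hypervolume measurements), replacing a fixed generation budget.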