Evolutionary algorithms are a standard tool for multi-objective optimization, able to approximate the Pareto front in a single optimization run. However, with some selection operators the algorithm stagnates at a certain distance from the Pareto front and fails to converge in further iterations. We analyze this behavior for different multi-objective selection operators and derive a simple analytical estimate of the stagnation distance for several operators that use the dominance criterion for fitness assignment. Two of the examined operators are shown to converge to the Pareto front with arbitrary precision. We exploit this property and propose a novel algorithm that increases their convergence speed by introducing a suitable self-adaptive mutation, which takes the distance to the Pareto front into account. All algorithms are analyzed on 2- and 3-objective test functions.
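The dominance criterion mentioned above can be sketched in a few lines. The following is a minimal illustration (assuming minimization) of Pareto dominance and of extracting the non-dominated subset of a population's objective vectors; the function names are illustrative and do not reproduce the authors' implementation:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Return the objective vectors not dominated by any other vector."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Example: (2, 2) is dominated by (1, 2); the other two are incomparable.
front = nondominated([(1, 2), (2, 1), (2, 2)])
```

A dominance-based selection operator would keep individuals from this non-dominated set; the distance-aware self-adaptation proposed in the paper would additionally scale the mutation step size using an estimate of the remaining distance to the Pareto front.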