Reducing Local Optima in Single-Objective Problems by Multi-objectivization
EMO '01: Proceedings of the First International Conference on Evolutionary Multi-Criterion Optimization

Multiobjectivization by Decomposition of Scalar Cost Functions
PPSN X: Proceedings of the 10th International Conference on Parallel Problem Solving from Nature

Investigations into the Effect of Multiobjectivization in Protein Structure Prediction
PPSN X: Proceedings of the 10th International Conference on Parallel Problem Solving from Nature

On the Effects of Adding Objectives to Plateau Functions
IEEE Transactions on Evolutionary Computation

Searching under Multi-Evolutionary Pressures
EMO '03: Proceedings of the 2nd International Conference on Evolutionary Multi-Criterion Optimization

Maximizing Population Diversity in Single-Objective Optimization
GECCO '11: Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation

A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II
IEEE Transactions on Evolutionary Computation
The idea of multiobjectivization is to reformulate a single-objective problem as a multiobjective one. In one of the few studies proposing this idea for problems in continuous domains, the distance to the closest neighbor (DCN) within the population of a multiobjective algorithm was used as an additional (dynamic) second objective. Since no comparison of this approach with other state-of-the-art single-objective optimizers has been presented, we benchmarked two variants of the original NSGA-II algorithm (with and without the second DCN objective), each using two different mutation operators, on the noiseless BBOB'2013 testbed. It turns out that multiobjectivization helps on several of the 24 benchmark functions, but that, compared with the best algorithms from BBOB'2009, a substantial performance loss is evident. Moreover, on some functions, the choice of mutation operator has a stronger impact on performance than whether multiobjectivization is employed at all.
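To make the DCN idea concrete, the following is a minimal sketch of how the dynamic second objective could be computed for a population: each individual's second objective is its Euclidean distance to the nearest other member of the current population. The function name and the negation convention (so that both objectives are minimized, as in NSGA-II) are illustrative assumptions, not details taken from the paper's implementation.

```python
import math

def dcn_objective(population):
    """Return, for each individual, minus its distance to the closest
    neighbor in the population (negated so that maximizing diversity
    becomes a minimization objective, matching NSGA-II's convention).

    population: list of equal-length coordinate tuples, len >= 2.
    """
    dcn = []
    for i, x in enumerate(population):
        # Distance to the nearest *other* individual in the population.
        closest = min(
            math.dist(x, y) for j, y in enumerate(population) if j != i
        )
        dcn.append(-closest)
    return dcn
```

Because the population changes every generation, this objective is dynamic: the same point can receive different DCN values at different times, which is what distinguishes this multiobjectivization from adding a fixed second objective.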