Various multi-objective evolutionary algorithms (MOEAs) have obtained promising results on numerical multi-objective optimization problems. Their combination with gradient-based local search operators has, however, been studied only in a few cases. In the single-objective case it is known that the additional use of gradient information can be beneficial. In this paper we provide an analytical parametric description of the set of all non-dominated (i.e. most promising) directions in which a solution can be moved such that each of its objectives either improves or remains the same. Moreover, the parameters describing this set can be computed efficiently using only the gradients of the individual objectives. We use this result to hybridize an existing MOEA with a local search operator that moves a solution in a randomly chosen non-dominated improving direction. We test the resulting algorithm on several well-known benchmark problems and compare the results with the same MOEA without local search and with the same MOEA hybridized with gradient-based techniques that use only one objective at a time. The results indicate that exploiting gradient information based on the non-dominated improving directions is superior to using the gradients of the objectives separately, and that, given enough evaluations, it can further improve the results of MOEAs in which no local search is used.
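The abstract does not give the parametric description itself, but its spirit can be illustrated with a minimal sketch: for two objectives, a direction sampled from the convex combinations of the normalized negative gradients locally improves (or at least does not worsen) both objectives whenever the gradients are not exactly opposed. The function name, the toy quadratic objectives, and the fixed combination weight below are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def nondominated_direction(grads, alpha=None, rng=None):
    """Sketch of a randomly chosen improving direction for two objectives:
    a convex combination of the normalized negative gradients.
    (Hypothetical helper; the paper derives the exact parametric set.)"""
    rng = np.random.default_rng() if rng is None else rng
    units = [g / np.linalg.norm(g) for g in grads]
    if alpha is None:
        alpha = rng.random()  # random weight selects a direction in the set
    return -(alpha * units[0] + (1.0 - alpha) * units[1])

# Toy bi-objective problem: distances to two different anchor points.
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
f1 = lambda x: float(np.sum((x - a) ** 2))
f2 = lambda x: float(np.sum((x - b) ** 2))

x = np.array([2.0, 2.0])
g1, g2 = 2.0 * (x - a), 2.0 * (x - b)      # analytic gradients
d = nondominated_direction([g1, g2], alpha=0.5)
x_new = x + 0.01 * d                        # small local-search step

print(f1(x_new) < f1(x), f2(x_new) < f2(x))  # True True: both improve
```

Such a step could serve as the local search operator inside an MOEA: sample a weight, move a small distance along the resulting direction, and keep the move if it is not dominated by the original solution.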