Exploiting gradient information in numerical multi-objective evolutionary optimization

  • Authors:
  • Peter A. N. Bosman; Edwin D. de Jong

  • Affiliations:
  • Centre for Mathematics and Computer Science, Amsterdam, The Netherlands; Utrecht University, Utrecht, The Netherlands

  • Venue:
  • GECCO '05: Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation
  • Year:
  • 2005

Abstract

Various multi-objective evolutionary algorithms (MOEAs) have obtained promising results on numerical multi-objective optimization problems, but their combination with gradient-based local search operators has so far been studied only rarely. In the single-objective case, the additional use of gradient information is known to be beneficial. In this paper we provide an analytical parametric description of the set of all non-dominated (i.e. most promising) directions in which a solution can be moved such that each objective either improves or remains the same. The parameters describing this set can be computed efficiently using only the gradients of the individual objectives. We use this result to hybridize an existing MOEA with a local search operator that moves a solution in a randomly chosen non-dominated improving direction. We test the resulting algorithm on a few well-known benchmark problems and compare the results with the same MOEA without local search and with the same MOEA hybridized with gradient-based techniques that use only one objective at a time. The results indicate that exploiting gradient information based on the non-dominated improving directions is superior to using the gradients of the objectives separately, and that, given enough evaluations, it can also improve upon the results of the MOEA in which no local search is used.
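
To make the idea concrete, the sketch below illustrates one simple realization of the bi-objective case in Python: it samples a direction from the cone of directions along which neither (minimized) objective increases, built from the two individually estimated gradients, and takes a backtracked step along it. The toy objectives, the numerical gradient estimates, and the one-parameter weighting by alpha are illustrative assumptions, not the paper's exact analytical parametrization or its hybrid MOEA.

```python
import numpy as np

def numerical_gradient(f, x, eps=1e-6):
    """Central-difference estimate of the gradient of f at x."""
    g = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        e = np.zeros_like(x, dtype=float)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2.0 * eps)
    return g

def sample_improving_direction(g1, g2, rng):
    """Sample a direction d with g1.d <= 0 and g2.d <= 0 (bi-objective,
    minimization), taken from the cone spanned by the negated normalized
    gradients. Returns None at a (locally) Pareto-stationary point."""
    n1, n2 = np.linalg.norm(g1), np.linalg.norm(g2)
    if n1 < 1e-12 or n2 < 1e-12:
        return None
    u1, u2 = g1 / n1, g2 / n2
    c = float(np.dot(u1, u2))          # cosine of the angle between the gradients
    if c <= -1.0 + 1e-12:
        return None                    # gradients oppose each other: stationary
    # Weights alpha for which d = -(alpha*u1 + (1-alpha)*u2) does not
    # increase either objective: all of [0, 1] if c >= 0, else a sub-interval.
    lo, hi = (0.0, 1.0) if c >= 0.0 else (-c / (1.0 - c), 1.0 / (1.0 - c))
    alpha = rng.uniform(lo, hi)
    d = -(alpha * u1 + (1.0 - alpha) * u2)
    norm = np.linalg.norm(d)
    return d / norm if norm > 1e-12 else None

def local_search_step(f1, f2, x, rng, step=0.1, shrink=0.5, max_tries=20):
    """One gradient-based local-search move: step along a sampled improving
    direction, backtracking until neither objective gets worse."""
    g1, g2 = numerical_gradient(f1, x), numerical_gradient(f2, x)
    d = sample_improving_direction(g1, g2, rng)
    if d is None:
        return x
    fx1, fx2 = f1(x), f2(x)
    for _ in range(max_tries):
        y = x + step * d
        if f1(y) <= fx1 and f2(y) <= fx2:
            return y
        step *= shrink
    return x

if __name__ == "__main__":
    # Toy bi-objective problem (both minimized); the two minimizers conflict.
    def f1(x): return float(np.sum(x ** 2))
    def f2(x): return float(np.sum((x - 1.0) ** 2))
    rng = np.random.default_rng(0)
    x = np.array([2.0, -1.5])
    for _ in range(50):
        x = local_search_step(f1, f2, x, rng)
    print("solution:", x, "f1:", f1(x), "f2:", f2(x))
```

Restricting the sampled direction to the span of the two gradients keeps the parametrization one-dimensional (a single weight alpha); in higher dimensions the full cone of directions that leave no objective worse off is larger, which is the set the paper characterizes analytically. In a hybrid MOEA, a step of this kind would presumably be applied to selected population members alongside the usual variation operators.
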