A Pareto following variation operator for fast-converging multiobjective evolutionary algorithms

  • Authors:
  • A.K.M. Khaled Ahsan Talukder; Michael Kirley; Rajkumar Buyya

  • Affiliations:
  • The University of Melbourne, Melbourne, Australia (all authors)

  • Venue:
  • Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation (GECCO '08)

  • Year:
  • 2008

Abstract

One of the major difficulties when applying Multiobjective Evolutionary Algorithms (MOEAs) to real-world problems is the large number of objective function evaluations. Approximate (or surrogate) methods offer the possibility of reducing the number of evaluations without reducing solution quality. Artificial Neural Network (ANN) based models are one approach that has been used to approximate the future front from the currently available fronts with acceptable accuracy; however, the associated computational costs limit their effectiveness. In this work, we introduce a simple approach with comparatively small computational cost, developed as a variation operator that can be used in any multiobjective optimizer. In designing this model, we treat the whole search procedure as a dynamic system that takes the objective values of the current front as input and generates approximated design variables for the next front as output. Our motivation was to increase the speed of the hosting optimizer, and initial simulation experiments have produced encouraging results in comparison to NSGA-II, with performance compared in terms of the total number of function evaluations and the hypervolume metric. The variation operator has a worst-case complexity of O(nkN³), where N is the population size and n and k are the numbers of design variables and objectives, respectively.
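
To make the abstract's idea concrete, the sketch below is a rough, assumed reconstruction rather than the paper's actual operator: it fits a simple least-squares model from objective space to decision space on the current front, then queries that model at objective vectors shifted toward the ideal point to propose design variables for the next front. The function name pareto_following_variation, the step parameter, and the linear model are illustrative assumptions; the operator described in the paper and its O(nkN³) bound are not reproduced here.

    import numpy as np

    def pareto_following_variation(X, F, step=0.05, rng=None):
        """Propose decision vectors aimed at the next (improved) front.

        X    : (N, n) decision variables of the current non-dominated set
        F    : (N, k) objective values of those solutions (minimisation)
        step : fraction by which each objective vector is pushed toward
               the ideal point to form the target front (assumed parameter)
        """
        rng = np.random.default_rng() if rng is None else rng
        N, n = X.shape

        # Target objectives: shift the current front toward the ideal point.
        ideal = F.min(axis=0)
        F_target = F + step * (ideal - F)

        # Fit a linear least-squares map from objective space (plus a bias
        # term) to decision space, using the current front as training data.
        A = np.hstack([F, np.ones((N, 1))])
        W, *_ = np.linalg.lstsq(A, X, rcond=None)

        # Query the map at the target objectives and add a small Gaussian
        # perturbation so the operator still explores.
        A_target = np.hstack([F_target, np.ones((N, 1))])
        return A_target @ W + 0.01 * rng.standard_normal((N, n))

The linear map is only the simplest stand-in for the ANN surrogates mentioned above; any regressor that maps objective vectors back to decision variables could be substituted without changing the overall scheme.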