A review of multiobjective test problems and a scalable test problem toolkit

  • Authors:
  • S. Huband; P. Hingston; L. Barone; L. While

  • Affiliations:
  • Edith Cowan Univ., Mount Lawley, WA

  • Venue:
  • IEEE Transactions on Evolutionary Computation
  • Year:
  • 2006

Abstract

When attempting to better understand the strengths and weaknesses of an algorithm, it is important to have a strong understanding of the problem at hand. This is as true for the field of multiobjective evolutionary algorithms (EAs) as it is for any other field. Many of the multiobjective test problems employed in the EA literature have not been rigorously analyzed, which makes it difficult to draw accurate conclusions about the strengths and weaknesses of the algorithms tested on them. In this paper, we systematically review and analyze many problems from the EA literature, each belonging to the important class of real-valued, unconstrained, multiobjective test problems. To support this, we first introduce a set of test problem criteria, which are in turn supported by a set of definitions. Our analysis of test problems highlights a number of areas requiring attention. Not only are many test problems poorly constructed, but the important class of nonseparable problems, particularly nonseparable multimodal problems, is also poorly represented. Motivated by these findings, we present a flexible toolkit for constructing well-designed test problems. We also present empirical results demonstrating how the toolkit can be used to test an optimizer in ways that existing test suites do not.
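
To make the problem class concrete, the sketch below implements ZDT1, one of the widely used real-valued, unconstrained, two-objective test problems from the EA literature of the kind reviewed here. It is an illustrative example only, not the toolkit introduced in the paper; the function name `zdt1` and the use of NumPy are choices made for this sketch.

```python
import numpy as np

def zdt1(x):
    """ZDT1: a real-valued, unconstrained, two-objective test problem.

    x is a vector in [0, 1]^n with n >= 2. The Pareto-optimal front is
    attained when x[1:] are all zero (so g == 1), giving f2 = 1 - sqrt(f1).
    """
    x = np.asarray(x, dtype=float)
    f1 = x[0]
    g = 1.0 + 9.0 * x[1:].sum() / (len(x) - 1)
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return f1, f2

# A Pareto-optimal point (x[1:] all zero) versus a dominated one.
print(zdt1([0.25, 0.0, 0.0, 0.0]))  # on the front: f2 = 1 - sqrt(0.25) = 0.5
print(zdt1([0.25, 0.5, 0.5, 0.5]))  # dominated: g > 1 pushes f2 upward
```

Note that ZDT1 is separable: g is a plain sum over x[1:], so each variable's contribution can be optimized independently of the others. This illustrates why the review's finding matters; suites built from problems like this cannot stress an optimizer's ability to handle variable interactions, which is precisely what nonseparable (and especially nonseparable multimodal) problems test.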