Multi-objective equivalent random search

  • Authors: Evan J. Hughes
  • Affiliation: Department of Aerospace, Power and Sensors, Cranfield University, Shrivenham, Swindon, Wiltshire, England
  • Venue: PPSN'06: Proceedings of the 9th International Conference on Parallel Problem Solving from Nature
  • Year: 2006

Abstract

This paper introduces a new metric vector for assessing the performance of different multi-objective algorithms, relative to the range of performance expected from a random search. The metric requires an ensemble of repeated trials to be performed, reducing the chance of overly favourable results. The random-search baseline for the function-under-test may be either analytic or created from a Monte-Carlo process; the metric is therefore repeatable and accurate. The metric allows both the median and worst performance of different algorithms to be compared directly, and scales well to high-dimensional many-objective problems. The metric quantifies, and is sensitive to, the distance of the solutions from the Pareto set, the distribution of points across the set, and the repeatability of the trials. Both the Monte-Carlo and closed-form analysis methods provide accurate analytic confidence intervals on the observed results.
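The sketch below illustrates the comparison the abstract describes: an ensemble of repeated random-search trials yields an empirical baseline distribution of a quality indicator, against which an optimiser's result can be ranked and bracketed by confidence limits. The bi-objective test function, the 2-D hypervolume indicator, and all budgets and counts are illustrative assumptions; this is not the paper's actual metric vector or its analytic baseline.

```python
# Minimal sketch of a Monte-Carlo random-search baseline for a
# multi-objective problem. The test function, indicator, and
# parameters are hypothetical stand-ins, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
REF = np.array([2.0, 2.0])  # assumed hypervolume reference point

def f(x):
    """Hypothetical bi-objective test function on [0, 1]^2 (minimisation)."""
    return np.array([x[0], 1.0 - np.sqrt(x[0]) + x[1]])

def nondominated(points):
    """Non-dominated subset of an (n, 2) objective array (minimisation)."""
    keep = []
    for i, p in enumerate(points):
        others = np.delete(points, i, axis=0)
        if not np.any(np.all(others <= p, axis=1) & np.any(others < p, axis=1)):
            keep.append(i)
    return points[keep]

def hypervolume_2d(front, ref=REF):
    """Exact 2-D hypervolume dominated by a non-dominated front."""
    pts = front[np.argsort(front[:, 0])]  # f1 ascending -> f2 descending
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

def random_search_trial(budget=200):
    """One random-search run: sample `budget` points, score the front."""
    F = np.array([f(x) for x in rng.random((budget, 2))])
    return hypervolume_2d(nondominated(F))

# Ensemble of repeated trials -> empirical baseline distribution.
baseline = np.sort([random_search_trial() for _ in range(500)])

# Rank an optimiser's score against the baseline: the fraction of
# random-search trials it beats, plus an empirical 95% interval
# from the order statistics of the ensemble.
algo_score = 1.05 * np.median(baseline)  # stand-in for a real optimiser run
beaten = np.searchsorted(baseline, algo_score) / len(baseline)
lo, hi = baseline[12], baseline[487]  # ~2.5% and ~97.5% order statistics
print(f"median random-search HV: {np.median(baseline):.3f}")
print(f"baseline 95% interval: [{lo:.3f}, {hi:.3f}]")
print(f"optimiser beats {beaten:.1%} of random-search trials")
```

Because the baseline is a full empirical distribution rather than a single reference value, both the median and the worst of an optimiser's repeated trials can be ranked against it in the same way, which is the property the abstract emphasises.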