Better GP benchmarks: community survey results and proposals

  • Authors:
  • David R. White; James McDermott; Mauro Castelli; Luca Manzoni; Brian W. Goldman; Gabriel Kronberger; Wojciech Jaśkowski; Una-May O'Reilly; Sean Luke

  • Affiliations:
  • School of Computing Science, University of Glasgow, Glasgow, UK; School of Business, University College Dublin, Dublin, Ireland; Instituto Superior de Estatística e Gestão de Informação (ISEGI), Universidade Nova de Lisboa, Lisbon, Portugal; Dipartimento di Informatica, Sistemistica e Comunicazione, University of Milano-Bicocca, Milan, Italy; BEACON Center for the Study of Evolution in Action, Michigan State University, East Lansing, USA; University of Applied Sciences Upper Austria, Linz, Austria; Institute of Computing Science, Poznan University of Technology, Poznan, Poland; CSAIL, Massachusetts Institute of Technology, Cambridge, USA; Department of Computer Science, George Mason University, Fairfax, USA

  • Venue:
  • Genetic Programming and Evolvable Machines
  • Year:
  • 2013

Abstract

We present the results of a community survey regarding genetic programming benchmark practices. Analysis shows broad consensus that improvement is needed in both problem selection and experimental rigor. While views expressed in the survey dissuade us from proposing a large-scale benchmark suite, we find community support for creating a "blacklist" of problems that are in common use but have important flaws, and whose use should therefore be discouraged. We propose a set of possible replacement problems.