On Benchmarking Constraint Logic Programming Platforms. Response to Fernandez and Hill's “A Comparative Study of Eight Constraint Programming Languages over the Boolean and Finite Domains”

  • Authors:
  • Mark Wallace; Joachim Schimpf; Kish Shen; Warwick Harvey

  • Affiliations:
  • IC-Parc, William Penney Laboratory, Imperial College, London SW7 2AZ, UK (m.wallace@icparc.ic.ac.uk; j.schimpf@icparc.ic.ac.uk; k.shen@icparc.ic.ac.uk; w.harvey@icparc.ic.ac.uk)

  • Venue:
  • Constraints
  • Year:
  • 2004

Abstract

The comparative study published in this journal by Fernandez and Hill benchmarked some constraint programming systems on a set of well-known puzzles. The current article examines the positive and negative aspects of this kind of benchmarking.

The article analyses some pitfalls in benchmarking, recalling previously published results from benchmarking different kinds of software, and explores some issues in the comparative benchmarking of CLP systems.

A benchmarking exercise should cover a broad set of representative problems and a broad set of programming constructs. This can be achieved using two kinds of benchmarking: Applications Benchmarking and Unit Testing. The article reports the authors' experiences with these two kinds of benchmarking in the context of the CHIC2 Esprit project. The benchmarks were used to unit test different features of the CLP system ECLiPSe and to compare application development with different high-level constraint platforms.

The conclusion is that, in deciding which system to use on a new application, it is less useful to compare standard features of CLP systems than to compare their relevant functionalities.
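
For readers unfamiliar with the style of puzzle benchmark discussed above, a minimal sketch in ECLiPSe (using its ic finite-domain library) of the classic SEND+MORE=MONEY cryptarithmetic gives the flavour; the predicate name sendmore/1 is illustrative and not taken from any of the benchmark suites mentioned in the article:

    % Load the ic finite-domain constraint library.
    :- lib(ic).

    % Solve SEND + MORE = MONEY with all letters bound to distinct digits.
    sendmore(Digits) :-
        Digits = [S, E, N, D, M, O, R, Y],
        Digits #:: 0..9,                 % each letter is a digit
        alldifferent(Digits),            % all letters take distinct values
        S #\= 0,                         % no leading zeros
        M #\= 0,
                     1000*S + 100*E + 10*N + D
                   + 1000*M + 100*O + 10*R + E
        #= 10000*M + 1000*O + 100*N + 10*E + Y,
        labeling(Digits).                % search for a concrete assignment

Querying ?- sendmore(Digits). yields the puzzle's unique solution, Digits = [9, 5, 6, 7, 1, 0, 8, 2]. Such puzzles exercise only a narrow slice of a CLP system (integer domains, disequality, one global constraint, default search), which is precisely the limitation of puzzle-based comparisons that the article argues against.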