Benchmarking a model checker for algorithmic improvements and tuning for performance

  • Authors:
  • Gianpiero Cabodi, Sergio Nocco, Stefano Quer

  • Affiliation:
  • Dip. di Automatica e Informatica, Politecnico di Torino, Turin, Italy (all authors)

  • Venue:
  • Formal Methods in System Design
  • Year:
  • 2011

Abstract

This paper describes a portfolio-based approach to model checking, i.e., an approach in which several model checking engines are orchestrated to reach the best possible performance on a broad set of real designs. Model checking algorithms are evaluated through experiments, and the experimental data inspire package tuning as well as new algorithmic features and methodologies. This approach, although similar to several industrial and academic experiences and already applied in other domains, is relatively new to the model checking field. Its contributions lie in the description of how we: (1) characterize and classify benchmarks dynamically, throughout experimental runs, (2) relate model checking problems to algorithms and engines, (3) introduce a dynamic tuning of sub-engines, exploiting on-the-fly performance analysis, (4) record results of different approaches and derive heuristics targeting different classes of problems. We provide a detailed description of the experiments performed in preparation for the Hardware Model Checking Competition 2010 (HWMCC'10), where PdTRAV, our academic verification tool, won the UNSAT division and ranked second in the OVERALL category.
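To make the portfolio idea concrete, the Python sketch below shows one way points (2)-(4) could fit together: engines are related to problems through a per-engine score, the time budget is reallocated on the fly, and results feed back into the scheduling heuristic. This is not PdTRAV's implementation; the engine binaries, the verdict protocol (engines printing SAT/UNSAT), the slice length, and the score-update rule are all illustrative assumptions.

```python
import subprocess
import time
from dataclasses import dataclass


@dataclass
class Engine:
    """One sub-engine of the portfolio (e.g. a BMC, interpolation, or
    BDD-reachability run). The command line is hypothetical; real tools
    do not necessarily expose their engines as separate binaries."""
    name: str
    cmd: list
    score: float = 1.0  # running performance score, updated on the fly


def run_portfolio(engines, model, slice_s=30.0, budget_s=900.0):
    """Time-sliced round-robin portfolio: repeatedly grant each engine a
    slice, ordered by its current score, and stop as soon as one engine
    decides the property. Engines that recently succeeded are scheduled
    first, approximating a dynamic tuning of sub-engines."""
    spent = 0.0
    while spent < budget_s:
        for eng in sorted(engines, key=lambda e: -e.score):
            t0 = time.time()
            try:
                proc = subprocess.run(
                    eng.cmd + [model],
                    capture_output=True, text=True,
                    timeout=min(slice_s, budget_s - spent),
                )
                verdict = proc.stdout.strip()
                if verdict in ("SAT", "UNSAT"):
                    eng.score += 1.0  # reward: bias future runs toward this engine
                    return eng.name, verdict, spent + (time.time() - t0)
            except subprocess.TimeoutExpired:
                eng.score *= 0.9  # decay: slice expired without an answer
            spent += time.time() - t0
            if spent >= budget_s:
                break
    return None, "UNDECIDED", spent


# Hypothetical usage: three engine binaries checking an AIGER model.
engines = [
    Engine("bmc", ["./bmc_engine"]),
    Engine("itp", ["./itp_engine"]),
    Engine("bdd", ["./bdd_reach_engine"]),
]
winner, verdict, elapsed = run_portfolio(engines, "design.aig")
```

Persisting the scores per benchmark class between runs would correspond to the paper's recording of results across experiments to derive class-specific heuristics.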