A framework for the competitive evaluation of model inference techniques

  • Authors:
  • Neil Walkinshaw;Kirill Bogdanov;Christophe Damas;Bernard Lambeau;Pierre Dupont

  • Affiliations:
  • The University of Sheffield, Sheffield, UK (Walkinshaw, Bogdanov); Université Catholique de Louvain (UCL), Louvain-la-Neuve, Belgium (Damas, Lambeau, Dupont)

  • Venue:
  • Proceedings of the First International Workshop on Model Inference In Testing
  • Year:
  • 2010

Abstract

This paper describes the STAMINA competition, which is designed to drive the evaluation and improvement of software model-inference approaches. To this end, the target models have characteristics that tend to appear in software models: they have large alphabets, and their states are not evenly connected by transitions (in contrast to the targets used in previous similar competitions). The competition extends previous competitions in the field of regular grammar inference, but it focusses on target models that are characteristic of software systems and features a suitably adapted protocol for generating training and testing samples. Besides providing details of the competition itself, the paper discusses how its outcomes will be used to gain broader insights into the relative accuracy and efficiency of competing techniques.