Language-Theoretic Abstraction Refinement

  • Authors:
  • Zhenyue Long; Georgel Calin; Rupak Majumdar; Roland Meyer

  • Affiliations:
  • Max Planck Institute for Software Systems, Germany and State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, China and Graduate University, Chinese Academy ...
  • Department of Computer Science, University of Kaiserslautern, Germany
  • Max Planck Institute for Software Systems, Germany
  • Department of Computer Science, University of Kaiserslautern, Germany

  • Venue:
  • FASE'12 Proceedings of the 15th international conference on Fundamental Approaches to Software Engineering
  • Year:
  • 2012


Abstract

We give a language-theoretic counterexample-guided abstraction refinement (CEGAR) algorithm for the safety verification of recursive multi-threaded programs. First, we reduce safety verification to the (undecidable) language emptiness problem for the intersection of context-free languages. Initially, our CEGAR procedure overapproximates the intersection by a context-free language. If the overapproximation is empty, we declare the system safe. Otherwise, we compute a bounded language from the overapproximation and check emptiness for the intersection of the context-free languages and the bounded language (which is decidable). If the intersection is non-empty, we report a bug. If it is empty, we refine the overapproximation by removing the bounded language and try again. The key idea of the CEGAR loop is the language-theoretic view: different strategies for obtaining regular overapproximations and bounded approximations of the intersection give different implementations. We give concrete algorithms to approximate context-free languages using regular languages and to generate bounded languages representing a family of counterexamples. We have implemented our algorithms and provide an experimental comparison of various choices for the regular overapproximation and the bounded underapproximation.
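The CEGAR loop from the abstract can be sketched as follows. This is a toy illustration, not the authors' implementation: finite sets of strings stand in for context-free languages (so every emptiness check is trivially decidable), the overapproximation is simply the union of the languages (a superset of their intersection), and the "bounded language" is a single candidate word chosen lexicographically. All function names and choices here are illustrative assumptions.

```python
def cegar(languages):
    """Toy CEGAR loop for intersection emptiness.

    languages: list of finite sets of strings, standing in for the
    context-free languages of the abstract.  Returns ("unsafe", w) with
    a witness word w in every language, or ("safe", None).
    """
    # Overapproximate the intersection.  The union is a (very coarse)
    # superset of the intersection; the paper instead computes regular
    # overapproximations of context-free languages.
    approx = set.union(*languages) if languages else set()

    while approx:
        # Extract a "bounded language": here a single candidate word,
        # chosen lexicographically for determinism.
        w = min(approx)
        # Decidable check: does the candidate lie in the intersection?
        if all(w in lang for lang in languages):
            return ("unsafe", w)  # genuine counterexample: report a bug
        # Spurious counterexample: refine the overapproximation by
        # removing the bounded language, and try again.
        approx -= {w}

    # Empty overapproximation: the intersection is empty, system is safe.
    return ("safe", None)
```

For example, `cegar([{"ab", "c"}, {"c", "d"}])` finds the shared word `"c"` and reports a bug, while `cegar([{"a"}, {"b"}])` exhausts the overapproximation and declares the system safe. The real algorithm's subtlety lies precisely in the two steps this toy trivializes: computing a regular overapproximation of a context-free intersection and generating bounded languages that cover whole families of counterexamples.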