Testing first-order logic axioms in program verification

  • Authors:
  • Ki Yung Ahn; Ewen Denney

  • Affiliations:
  • Portland State University, Portland, OR and Mission Critical Technologies, Inc., NASA Ames Research Center, Moffett Field, CA; Stinger Ghaffarian Technologies, Inc., NASA Ames Research Center, Moffett Field, CA

  • Venue:
  • TAP'10: Proceedings of the 4th International Conference on Tests and Proofs
  • Year:
  • 2010


Abstract

Program verification systems based on automated theorem provers rely on user-provided axioms in order to verify domain-specific properties of code. However, formulating axioms correctly (that is, formalizing properties of an intended mathematical interpretation) is non-trivial in practice, and unsoundness can be difficult to avoid or even to detect. Moreover, assessing the soundness of axioms from the output of the provers themselves is hard, since they do not typically give counterexamples. We adopt the idea of model-based testing to help axiom authors discover errors in axiomatizations. To test the validity of axioms, users define a computational model of the axiomatized logic by giving interpretations to the function symbols and constants in a simple declarative programming language. We have developed an axiom testing framework that helps automate model definition and test generation using off-the-shelf tools for meta-programming, property-based random testing, and constraint solving. We have experimented with our tool to test the axioms used in AutoCert, a program verification system that has been applied to verify aerospace flight code using a first-order axiomatization of navigational concepts, and were able to find counterexamples for a number of axioms.
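
To illustrate the idea sketched in the abstract, the following is a minimal, hypothetical example (not the paper's actual framework or its navigational axioms, and the axioms and interpretations shown here are invented for illustration) of testing first-order axioms by property-based random testing in Haskell with QuickCheck. The function symbols select and update are given a concrete computational interpretation, and QuickCheck searches for counterexamples to each axiom under that interpretation.

```haskell
-- Hypothetical sketch: testing candidate axioms under a concrete
-- interpretation using property-based random testing (QuickCheck).
import Test.QuickCheck

-- Interpretation of the sort "array": finite association lists.
type Array = [(Int, Int)]

-- Interpretation of the function symbol select (unbound indices map to 0).
select :: Array -> Int -> Int
select a i = maybe 0 id (lookup i a)

-- Interpretation of the function symbol update.
update :: Array -> Int -> Int -> Array
update a i v = (i, v) : filter ((/= i) . fst) a

-- Candidate axiom 1 (sound):  select(update(a,i,v), i) = v
prop_selectUpdateSame :: Array -> Int -> Int -> Bool
prop_selectUpdateSame a i v = select (update a i v) i == v

-- Candidate axiom 2 (unsound as stated, missing the side condition i /= j):
--   select(update(a,i,v), j) = select(a, j)
prop_selectUpdateOther :: Array -> Int -> Int -> Int -> Bool
prop_selectUpdateOther a i v j = select (update a i v) j == select a j

main :: IO ()
main = do
  quickCheck prop_selectUpdateSame    -- passes
  quickCheck prop_selectUpdateOther   -- fails; QuickCheck prints a counterexample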
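```

Running main would confirm the first axiom and report a concrete counterexample (with i == j) for the second, mirroring the counterexample-finding workflow that the abstract describes, though the paper's actual framework additionally automates model definition and test generation with meta-programming and constraint solving.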