An abstraction-refinement approach to verification of artificial neural networks

  • Authors:
  • Luca Pulina and Armando Tacchella

  • Affiliations:
  • DIST, Università di Genova, Genova, Italy (both authors)

  • Venue:
  • CAV'10: Proceedings of the 22nd International Conference on Computer Aided Verification
  • Year:
  • 2010

Abstract

A key problem in the adoption of artificial neural networks in safety-related applications is that misbehaviors can hardly be ruled out with traditional analytical or probabilistic techniques. In this paper we focus on specific networks known as Multi-Layer Perceptrons (MLPs), and we propose a solution to verify their safety using abstractions to Boolean combinations of linear arithmetic constraints. We show that our abstractions are consistent, i.e., whenever the abstract MLP is declared to be safe, the same holds for the concrete one. Spurious counterexamples, on the other hand, trigger refinements and can be leveraged to automate the correction of misbehaviors. We describe an implementation of our approach based on the HySAT solver, detailing the abstraction-refinement process and the automated correction strategy. Finally, we present experimental results confirming the feasibility of our approach on a realistic case study.
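To make the loop concrete, the following is a minimal Python sketch of an abstraction-refinement (CEGAR) procedure in the spirit of the abstract. A toy one-hidden-layer MLP is over-approximated with interval arithmetic, a simpler stand-in for the paper's encoding into Boolean combinations of linear arithmetic constraints solved with HySAT. The abstraction is consistent (abstract safe implies concrete safe), and spurious counterexamples trigger refinement by splitting the offending input piece. The network weights, the threshold `tau`, and all helper names are hypothetical illustrations, not the paper's benchmark or implementation.

```python
# Minimal CEGAR sketch for verifying "y(x) <= tau for all x in [lo, hi]"
# on a toy MLP. Illustrative only; not the paper's HySAT-based tool.
import math

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

# Concrete MLP with two hidden sigmoid units (all values hypothetical):
# y(x) = V1*sig(W1*x + B1) + V2*sig(W2*x + B2)
W1, B1, V1 = 4.0, 0.0, 1.0
W2, B2, V2 = 4.0, -2.0, -1.0

def concrete(x):
    return V1 * sig(W1 * x + B1) + V2 * sig(W2 * x + B2)

def abstract_bounds(lo, hi):
    """Over-approximate y on [lo, hi] by interval arithmetic. Each hidden
    unit is bounded exactly (sigmoid is monotone and W1, W2 > 0), but the
    correlation between the two units is lost, so the result is a sound
    over-approximation of the concrete output range on the piece."""
    h1 = (sig(W1 * lo + B1), sig(W1 * hi + B1))
    h2 = (sig(W2 * lo + B2), sig(W2 * hi + B2))
    # V1 > 0 preserves interval orientation, V2 < 0 flips it.
    lower = V1 * h1[0] + V2 * h2[1]
    upper = V1 * h1[1] + V2 * h2[0]
    return lower, upper

def verify(lo, hi, tau, max_iters=200):
    """CEGAR loop: check the abstraction, concretize abstract
    counterexamples, and refine when they turn out to be spurious."""
    pieces = [(lo, hi)]  # coarsest abstraction: one piece for the domain
    for it in range(max_iters):
        # Abstract check: safe if every piece's upper bound respects tau.
        worst = max(pieces, key=lambda p: abstract_bounds(*p)[1])
        _, upper = abstract_bounds(*worst)
        if upper <= tau:
            # Consistency: the abstract MLP is safe, hence so is the
            # concrete one.
            return "SAFE", it
        # Abstract counterexample: concretize at the piece midpoint.
        x_star = 0.5 * (worst[0] + worst[1])
        if concrete(x_star) > tau:
            return f"UNSAFE at x={x_star:.4f}", it  # genuine violation
        # Spurious counterexample: refine by splitting the offending piece.
        pieces.remove(worst)
        pieces += [(worst[0], x_star), (x_star, worst[1])]
    return "UNKNOWN", max_iters

print(verify(-2.0, 2.0, tau=0.5))  # holds: the true maximum of y is ~0.462
print(verify(-2.0, 2.0, tau=0.4))  # violated near x = 0.25
```

In this sketch, refinement is a uniform interval split; the paper instead controls abstraction granularity in its linear-arithmetic encoding, but the overall structure (sound over-approximation, consistency check, spurious-counterexample-driven refinement) is the same.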