We show how interpolants can be viewed as classifiers in supervised machine learning. This view has several advantages. First, we can use off-the-shelf classification techniques, in particular support vector machines (SVMs), to compute interpolants. Second, we show that SVMs find relevant predicates on a number of benchmarks; because classification algorithms are predictive, the interpolants computed via classification are likely to be invariants. Finally, the machine learning view also lets us handle superficial non-linearities: even when the underlying problem structure is linear, the symbolic constraints can give the impression that we are solving a non-linear problem. Since learning algorithms mine the underlying structure directly, we can recover the linear structure of such problems. We demonstrate the feasibility of our approach through experiments on benchmarks drawn from various papers on program verification.
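The core idea above can be sketched concretely: sample program states reachable from the initial states serve as positive examples, states that violate the property serve as negative examples, and a linear classifier separating the two yields a candidate interpolant. The sketch below is a minimal, self-contained illustration: it substitutes a perceptron for the SVM used in the paper (to avoid external dependencies), and the sample points and the predicate they encode are hypothetical, chosen only to make the classifier's output readable.

```python
# Sketch: a candidate linear interpolant learned as a binary classifier.
# Positive samples stand in for reachable states; negative samples stand
# in for property-violating states. A separating hyperplane w.x + b = 0
# gives the candidate predicate w.x + b >= 0.
# NOTE: a perceptron replaces the SVM of the paper so the example runs
# without external libraries; the data points are illustrative.

def learn_separator(pos, neg, epochs=1000, lr=0.1):
    """Perceptron: find (w, b) with w.p + b > 0 on pos, < 0 on neg."""
    dim = len(pos[0])
    w, b = [0.0] * dim, 0.0
    data = [(p, 1) for p in pos] + [(n, -1) for n in neg]
    for _ in range(epochs):
        updated = False
        for x, y in data:
            # Misclassified (or on the boundary): nudge the hyperplane.
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
                updated = True
        if not updated:  # every sample classified: separator found
            break
    return w, b

# Hypothetical samples: "reachable" states satisfy x - y >= 1,
# "error" states satisfy x - y <= -1.
pos = [(2.0, 0.0), (3.0, 1.0), (5.0, 2.0)]
neg = [(0.0, 2.0), (1.0, 3.0), (-1.0, 1.0)]
w, b = learn_separator(pos, neg)
print("candidate interpolant: %+.2f*x %+.2f*y %+.2f >= 0" % (w[0], w[1], b))
```

An SVM improves on this sketch by maximizing the margin between the two sample sets, which (as the abstract argues) makes the learned predicate more likely to generalize to unseen states, i.e. to be a true invariant rather than an artifact of the particular samples.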