Testing a black-box system without recourse to a specification is difficult, because there is no basis for estimating how many tests will be required or for assessing how complete a given test set is. Several researchers have noted a duality between these testing problems and the problem of inductive inference (learning a model of a hidden system from a set of examples): it is impossible to tell how many examples will be required to infer an accurate model, and there is no basis for judging how complete a given set of examples is. In inductive inference, these issues have been addressed with statistical techniques in which the accuracy of an inferred model is guaranteed only up to a tolerable degree of error. This paper explores the application of these techniques to the assessment of test sets for black-box systems. It shows how they can be used to reason, in a statistically justified manner, about the number of tests required to fully exercise a system without a specification, and how they provide a valid adequacy measure for black-box test sets in an applied context.
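The statistical reasoning alluded to above can be illustrated with the standard PAC (probably approximately correct) sample-size bound for a finite hypothesis class: drawing m ≥ (1/ε)(ln|H| + ln(1/δ)) random tests suffices to guarantee, with probability at least 1 − δ, that a model consistent with all of them has error at most ε. The sketch below is not the paper's method; the function names, the finite-hypothesis-space assumption, and the agreement-based adequacy score are illustrative choices.

```python
import math

def pac_sample_bound(epsilon, delta, hypothesis_space_size):
    """Number of random tests sufficient, under the classic PAC bound for a
    finite hypothesis class, so that any model consistent with all of them
    has error <= epsilon with probability >= 1 - delta.
    (Illustrative; assumes tests are drawn i.i.d. from the operational
    distribution and the true behaviour lies in the hypothesis class.)"""
    return math.ceil((1.0 / epsilon)
                     * (math.log(hypothesis_space_size) + math.log(1.0 / delta)))

def adequacy(test_inputs, sut, inferred_model):
    """A simple adequacy score for a black-box test set: the fraction of
    inputs on which a model inferred from the tests agrees with the system
    under test. A score near 1.0 suggests the tests pinned down the
    behaviour well enough for the learner to reproduce it."""
    agreements = sum(1 for x in test_inputs if inferred_model(x) == sut(x))
    return agreements / len(test_inputs)

# Example: 5% error tolerance, 95% confidence, 1024 candidate models.
m = pac_sample_bound(epsilon=0.1, delta=0.05, hypothesis_space_size=1024)

# Toy adequacy check: an inferred model that matches the SUT everywhere.
sut = lambda x: x % 2 == 0          # hypothetical system under test
model = lambda x: x % 2 == 0        # hypothetical inferred model
score = adequacy(range(100), sut, model)
```

Note the duality in action: the same bound that tells a learner how many examples it needs tells a tester how many random tests justify a statistical claim of coverage, without any specification.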