ANNA: a language for annotating Ada programs
Software testing based on formal specifications: a theory and a tool
Software Engineering Journal
Specification-based test oracles for reactive systems
ICSE '92 Proceedings of the 14th international conference on Software engineering
Test template framework: a specification-based testing case study
ISSTA '93 Proceedings of the 1993 ACM SIGSOFT international symposium on Software testing and analysis
Experience with Formal Methods in Critical Systems
IEEE Software
Data Abstraction, Implementation, Specification, and Testing
ACM Transactions on Programming Languages and Systems (TOPLAS)
A generalized control structure and its formal definition
Communications of the ACM
Predicate Logic for Software Engineering
IEEE Transactions on Software Engineering
Precise Documentation of Well-Structured Programs
IEEE Transactions on Software Engineering
Automated test oracles for GUIs
SIGSOFT '00/FSE-8 Proceedings of the 8th ACM SIGSOFT international symposium on Foundations of software engineering: twenty-first century applications
Theory of software reliability based on components
ICSE '01 Proceedings of the 23rd International Conference on Software Engineering
A Simple and Practical Approach to Unit Testing: The JML and JUnit Way
ECOOP '02 Proceedings of the 16th European Conference on Object-Oriented Programming
Using Transient/Persistent Errors to Develop Automated Test Oracles for Event-Driven Software
Proceedings of the 19th IEEE international conference on Automated software engineering
When only random testing will do
Proceedings of the 1st international workshop on Random testing
Designing and comparing automated test oracles for GUI-based software applications
ACM Transactions on Software Engineering and Methodology (TOSEM)
Smart: a tool for application reference testing
Proceedings of the twenty-second IEEE/ACM international conference on Automated software engineering
Design and analysis of GUI test-case prioritization using weight-based methods
Journal of Systems and Software
The future of library specification
Proceedings of the FSE/SDP workshop on Future of software engineering research
Neural networks based automated test oracle for software testing
ICONIP'06 Proceedings of the 13th international conference on Neural information processing - Volume Part III
Specifying a testing oracle for train stations
Proceedings of the 8th International Workshop on Model-Driven Engineering, Verification and Validation
Attribute reduction based expected outputs generation for statistical software testing
RSKT'06 Proceedings of the First international conference on Rough Sets and Knowledge Technology
Augmenting automatically generated unit-test suites with regression oracle checking
ECOOP'06 Proceedings of the 20th European conference on Object-Oriented Programming
Artificial neural networks as multi-networks automated test oracle
Automated Software Engineering
Specifying a testing oracle for train stations --- going beyond with product line technology
MODELS'11 Proceedings of the 2011 international conference on Models in Software Engineering
A fundamental assumption of software testing is that there is some mechanism, an oracle, that will determine whether or not the results of a test execution are correct. In practice this is often done by comparing the output, either automatically or manually, to some pre-calculated, presumably correct, output [17]. However, if the program is formally documented, the specification can be used to determine the success or failure of a test execution, as in [1], for example. This paper discusses ongoing work to produce a tool that will generate a test oracle from formal program documentation.

In [9], [10] and [11], Parnas et al. advocate the use of a relational model for documenting the intended behaviour of programs. In this method, tabular expressions are used to improve readability so that formal documentation can replace conventional documentation. Relations are described by giving their characteristic predicate in terms of the values of concrete program variables. This documentation method has the advantage that the characteristic predicate can be used as the test oracle: it simply must be evaluated for each test execution (input and output) to assign pass or fail. In contrast to [1], this paper discusses the testing of individual programs, not objects as used in [1]. Consequently, the method works with program documentation, written in terms of the concrete variables, and no representation function need be supplied. Documentation in this form, and the corresponding oracle, are illustrated by an example.

Finally, some of the implications of generating test oracles from relational specifications are discussed.
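The oracle scheme described above can be sketched in a few lines. This is a minimal illustration, not the paper's tool: the program `isqrt`, the predicate `oracle`, and the harness `run_test` are all hypothetical names chosen for the example. The key point is that the characteristic predicate of the specified relation, written over the concrete input and output values, is evaluated after each test execution to assign a pass or fail verdict.

```python
# Hypothetical example: a specification-based test oracle.
# The characteristic predicate of the program's intended relation is
# evaluated on each (input, output) pair to decide pass or fail.

def isqrt(n):
    """Program under test: integer square root (implementation detail)."""
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

def oracle(n, result):
    """Characteristic predicate of the specification relation:
    result is the floor of the square root of n."""
    return result * result <= n < (result + 1) * (result + 1)

def run_test(n):
    """Execute the program, then evaluate the predicate as the oracle."""
    out = isqrt(n)
    return "pass" if oracle(n, out) else "fail"

for n in [0, 1, 15, 16, 17]:
    print(n, run_test(n))
```

Note that the oracle never computes the expected output itself; it only checks that the observed input/output pair satisfies the relation, which is exactly what makes a relational specification usable as an oracle without a separate pre-calculated result.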