Despite widespread heterogeneity in operating systems, networks, and hardware, there has been little research into quantifying and improving the portability of a programming environment. We have constructed a distributed testbed called Seattle, built on a platform-independent programming API that is implemented across different operating systems and architectures. Our goal is to show that applications written to this API are portable. In this work, we use an instrumented version of the programming environment for testing. The instrumentation allows us to gather traces of actual program behavior from a running implementation. These traces can be used across different versions of the implementation exactly as if they were test cases generated offline from a model program, so we can begin testing with model-based testing tools without constructing a model program. Such offline testing is effective only in scenarios where traces are expected to be reproducible (deterministic). Where reproducibility is not expected, for instance due to nondeterminism in the network environment, we must resort to on-the-fly testing, which does require a model program. To validate this model program, we can use the recorded traces of actual behavior. Validating with captured traces should provide greater coverage than validating only with traces constructed a priori.
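To make the trace-based workflow concrete, the following Python sketch shows the three roles a captured trace can play: recording behavior from an instrumented implementation, replaying the trace against another implementation version as an offline test case, and validating a model program by checking that it accepts each observed step. All names here (`TraceRecorder`, `replay`, `validate_model`) are hypothetical illustrations, not the actual Seattle instrumentation.

```python
# Hypothetical sketch of trace capture and replay; not the Seattle codebase.

class TraceRecorder:
    """Wraps API calls so an instrumented runtime records (call, args, result)."""

    def __init__(self):
        self.trace = []

    def wrap(self, name, func):
        def instrumented(*args):
            result = func(*args)
            # Each executed call becomes one step of the captured trace.
            self.trace.append({"call": name, "args": list(args), "result": result})
            return result
        return instrumented


def replay(trace, impl):
    """Offline testing: run a captured trace against another implementation
    version, exactly as if it were a test case generated from a model program.
    Only meaningful when behavior is expected to be deterministic."""
    for step in trace:
        actual = impl[step["call"]](*step["args"])
        if actual != step["result"]:
            return False  # the two versions diverged on this step
    return True


def validate_model(trace, model_accepts):
    """Model validation: the model program need only *accept* each observed
    step (nondeterminism is allowed), rather than reproduce it exactly."""
    return all(model_accepts(step["call"], step["args"], step["result"])
               for step in trace)


# Usage: record a trace from one implementation, replay against another,
# and check the trace against a simple model.
recorder = TraceRecorder()
add = recorder.wrap("add", lambda a, b: a + b)
add(2, 3)

same_version = {"add": lambda a, b: a + b}
buggy_version = {"add": lambda a, b: a * b}
print(replay(recorder.trace, same_version))    # matches the recorded behavior
print(replay(recorder.trace, buggy_version))   # diverges on add(2, 3)
print(validate_model(recorder.trace,
                     lambda call, args, result: result == sum(args)))
```

The asymmetry between `replay` and `validate_model` mirrors the abstract's distinction: offline replay demands reproducibility, while validating a model against captured traces tolerates nondeterminism because the model only has to admit the observed behavior.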