Many techniques for testing and analyzing concurrent programs have been proposed in the literature. Currently, it is difficult to assess the fitness of a particular concurrency bug detection method, or to compare it with other methods, because of a lack of unbiased data that is representative of the kinds of concurrent programs used in practice. To address this problem, we propose a new benchmark of concurrent Java programs that is constructed using combinatorial test design. In this paper we present our combinatorial model for creating the benchmark, propose a new concurrency benchmark based on it, and discuss the relationship between our new benchmark and existing benchmarks. Specific combinations of the model parameters define different interleaving spaces, and thus differentiate between different testing tools.
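
To make the combinatorial-model idea concrete, the sketch below shows, in Java, how pairwise test design could drive benchmark construction. It is a minimal hypothetical illustration, not the model from the paper: the parameters (degree of concurrency, synchronization idiom, shared-access pattern) and their values are assumed for the example, and a simple greedy all-pairs selection picks a small set of configurations in which every pair of parameter values appears at least once. Each selected row would then correspond to one generated benchmark program.

import java.util.*;

/*
 * Hypothetical sketch of a combinatorial (pairwise) model for generating
 * concurrency-benchmark configurations. Parameter names and values are
 * illustrative assumptions, not the paper's actual model.
 */
public class PairwiseBenchmarkModel {

    // Each inner array lists the values of one model parameter.
    static final String[][] PARAMS = {
        {"2-threads", "4-threads", "8-threads"},      // degree of concurrency
        {"synchronized", "ReentrantLock", "no-lock"}, // synchronization idiom
        {"read-write", "write-write", "atomicity"}    // shared-access pattern
    };

    public static void main(String[] args) {
        // All (parameter, value) pairs that the suite must cover.
        Set<String> uncovered = new HashSet<>();
        for (int a = 0; a < PARAMS.length; a++)
            for (int b = a + 1; b < PARAMS.length; b++)
                for (String va : PARAMS[a])
                    for (String vb : PARAMS[b])
                        uncovered.add(a + ":" + va + "|" + b + ":" + vb);

        // Candidate rows: the full Cartesian product of parameter values.
        List<String[]> candidates = new ArrayList<>();
        cartesian(new String[PARAMS.length], 0, candidates);

        // Greedy selection: repeatedly take the row that covers the most
        // still-uncovered pairs until every pair is covered.
        List<String[]> suite = new ArrayList<>();
        while (!uncovered.isEmpty()) {
            String[] best = null;
            int bestGain = -1;
            for (String[] row : candidates) {
                int gain = 0;
                for (String p : pairsOf(row)) if (uncovered.contains(p)) gain++;
                if (gain > bestGain) { bestGain = gain; best = row; }
            }
            uncovered.removeAll(pairsOf(best));
            suite.add(best);
        }

        // Each selected row is one benchmark configuration to instantiate.
        for (String[] row : suite) System.out.println(Arrays.toString(row));
    }

    // Enumerate the Cartesian product of all parameter values.
    static void cartesian(String[] row, int i, List<String[]> out) {
        if (i == row.length) { out.add(row.clone()); return; }
        for (String v : PARAMS[i]) { row[i] = v; cartesian(row, i + 1, out); }
    }

    // List the (parameter, value) pairs contained in one row.
    static List<String> pairsOf(String[] row) {
        List<String> pairs = new ArrayList<>();
        for (int a = 0; a < row.length; a++)
            for (int b = a + 1; b < row.length; b++)
                pairs.add(a + ":" + row[a] + "|" + b + ":" + row[b]);
        return pairs;
    }
}

The greedy loop always makes progress because the candidate set is the full Cartesian product, so any uncovered pair is contained in at least one remaining candidate; the result is a pairwise-covering suite that is much smaller than the full product, which is what lets different parameter combinations carve out distinct interleaving spaces without enumerating every configuration.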