Recent advances in static program analysis have made it possible to detect errors in applications that have been thoroughly tested and are in widespread use. The ability to find errors that have eluded traditional validation methods is due to the development and combination of sophisticated algorithmic techniques embedded in the implementations of analysis tools. New analysis techniques are typically evaluated by running an analysis tool on a collection of subject programs, perhaps enabling and disabling a given technique in different runs. While seemingly sensible, this approach runs the risk of attributing improvements in the cost-effectiveness of the analysis to the technique under consideration, when those improvements may actually be due to details of the analysis tool's implementation that are left uncontrolled during the evaluation.

In this paper, we focus on the specific class of path-sensitive error detection techniques and identify several factors that can significantly influence the cost of analysis. We show, through careful empirical studies, that the influence of these factors is large enough that, if left uncontrolled, it may lead researchers to improperly attribute improvements in analysis cost and effectiveness. We make several recommendations as to how the influence of these factors can be mitigated when evaluating techniques.
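To make the evaluation pitfall concrete, the sketch below shows one way such a controlled experiment might be set up. It is only an illustration, not the paper's actual experimental harness: the analyzer path, command-line flags, and subject programs are all hypothetical placeholders. The idea is to toggle only the technique under study while pinning other known cost factors (here, the seed governing stochastic search order), and to repeat each configuration so that variance introduced by implementation details can be measured rather than silently folded into the technique's apparent effect.

```python
import csv
import subprocess
import time

# All paths, flags, and subjects below are hypothetical placeholders.
SUBJECTS = ["subjects/elevator", "subjects/replicated_workers"]
ANALYZER = "./analyzer"                    # path-sensitive analysis tool (placeholder)
TECHNIQUE_FLAG = "--use-heuristic-search"  # the technique being evaluated (placeholder)
SEEDS = [1, 2, 3, 4, 5]                    # repeat runs under explicit, fixed seeds

def run_once(subject: str, seed: int, technique_on: bool):
    """Run the analyzer once with all other known cost factors held constant."""
    cmd = [
        ANALYZER, subject,
        f"--seed={seed}",             # control stochastic search-order effects
        "--state-storage=default",    # hold implementation details fixed across runs
    ]
    if technique_on:
        cmd.append(TECHNIQUE_FLAG)
    start = time.monotonic()
    result = subprocess.run(cmd, capture_output=True, text=True)
    elapsed = time.monotonic() - start
    found_error = result.returncode != 0  # assumed convention: nonzero = error found
    return elapsed, found_error

def main():
    with open("results.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["subject", "seed", "technique_on", "seconds", "found_error"])
        for subject in SUBJECTS:
            for seed in SEEDS:
                for technique_on in (False, True):
                    elapsed, found = run_once(subject, seed, technique_on)
                    writer.writerow(
                        [subject, seed, technique_on, f"{elapsed:.2f}", found]
                    )

if __name__ == "__main__":
    main()
```

Reporting the distribution of costs across seeds, rather than a single run per configuration, is what allows an evaluator to distinguish a genuine improvement from noise introduced by uncontrolled tool-implementation factors.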