A static analysis design is sufficient if it can prove the property of interest with an acceptable number of false alarms. Ultimately, the only way to confirm that a design is sufficient is to implement it and run it on real-world programs. If the evaluation shows that the design is insufficient, the designer must return to the drawing board and repeat the process, wasting expensive implementation effort again and again. In this paper, we observe that there is a minimal range of code needed to prove a property of interest under an ideal static analysis; we call such a range of code a validation scope. Armed with this observation, we build a dynamic measurement framework that quantifies validation scopes and thus enables designers to rule out insufficient designs at lower cost. A novel attribute of our framework is its ability to model aspects of static reasoning using dynamic execution measurements. To evaluate the framework's flexibility, we instantiate it on an example property, null dereference errors, and measure validation scopes on real-world programs. We use a broad range of metrics that capture the difficulty of analyzing programs along varying dimensions. We also examine how validation scopes evolve as developers fix null dereference errors and as code matures. We find that bug fixes shorten validation scopes, that longer validation scopes are more likely to be buggy, and that, overall, validation scopes are remarkably stable as programs evolve.
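To make the notion of a validation scope concrete, the sketch below is a hypothetical illustration (not taken from the paper): for a null dereference, the validation scope is the minimal region of code an ideal analysis must examine to prove the dereference safe. The class and method names here are invented for the example; one dereference has a short, intraprocedural scope, while the other's scope must extend to every call site.

```java
// Hypothetical example: contrasting short and long validation scopes
// for proving a null dereference safe.
public class ValidationScopeDemo {

    // Short validation scope: the null check and the dereference sit in the
    // same method, so an intraprocedural analysis can prove safety by
    // inspecting only these few lines.
    static int localScope(String s) {
        if (s == null) {
            return 0;
        }
        return s.length(); // provably safe from the guard just above
    }

    // Longer validation scope: nothing in this method rules out null, so
    // proving the dereference safe requires showing that every caller passes
    // a non-null argument -- the scope spans all call sites.
    static int callerDependentScope(String s) {
        return s.length(); // safe only if all callers supply non-null
    }

    public static void main(String[] args) {
        System.out.println(localScope(null));            // guarded, returns 0
        System.out.println(callerDependentScope("abc")); // returns 3
    }
}
```

Under this framing, a design that reasons only intraprocedurally would suffice for the first dereference but be ruled out as insufficient for the second, without ever being fully implemented.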