Model checking and static analysis are two techniques widely used to detect property violations in code. For property checking on large software systems, however, only static analysis tools are typically applied: they scale to such systems, although they are less precise than model checkers. Every reported violation must then be examined manually to weed out the large number of false positives. This is effort-intensive, time-consuming, and requires a reasonable understanding of the system. In this paper, we present a technique that reduces the number of reported false positives by exploiting the incremental nature of large software system development: we perform an impact analysis of the changes introduced in the current version and suppress those false positives that are immune to these changes. The paper also reports our experience applying the technique to a large embedded software system, where it reduced the overall number of reported false positives by 80%.
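
The abstract gives no pseudocode, so the following is only a minimal Python sketch of the suppression idea under assumed inputs: a per-function dependence relation, the set of functions changed in the current version, and the warnings triaged as false positives on the previous version. All names here (impact_set, depends_on, suppress_immune, Finding) are hypothetical illustrations, not the paper's actual implementation.

    from collections import deque
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Finding:
        checker: str
        function: str   # enclosing function of the reported violation
        message: str

    def impact_set(changed, depends_on):
        """Transitive closure of the change set over a dependence relation.

        depends_on maps each function to the functions whose behaviour it
        depends on (callees, shared globals, ...). A function is impacted
        if it changed, or if it depends on an impacted function.
        """
        # Invert the relation: for each function, who depends on it?
        dependents = {}
        for f, deps in depends_on.items():
            for d in deps:
                dependents.setdefault(d, set()).add(f)
        # Breadth-first propagation from the changed functions.
        impacted = set(changed)
        work = deque(changed)
        while work:
            f = work.popleft()
            for g in dependents.get(f, ()):
                if g not in impacted:
                    impacted.add(g)
                    work.append(g)
        return impacted

    def suppress_immune(findings, known_false_positives, impacted):
        """Keep a finding only if it is new or sits in impacted code.

        A finding already triaged as a false positive on the previous
        version stays suppressed unless the changes could have invalidated
        that triage, i.e. unless its function is in the impact set.
        """
        return [f for f in findings
                if not (f in known_false_positives
                        and f.function not in impacted)]

A small usage example with a made-up dependence relation: with depends_on = {"parse": {"read"}, "main": {"parse", "log"}}, impact_set({"read"}, depends_on) yields {"read", "parse", "main"}, so a previously triaged false positive located in "log" is immune to the change in "read" and is suppressed, while any warning in "parse" is re-reported. Real change impact analysis would of course work on a program dependence graph at statement granularity rather than this coarse function-level relation.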