Dynamic analysis techniques have been extensively adopted to discover the causes of observed failures. In particular, anomaly detection techniques can infer behavioral models from observed legal executions and compare failing executions with the inferred models to automatically identify the anomalous events that likely caused the observed failures. Unfortunately, the output of these techniques is limited to a set of independent suspicious anomalous events that does not capture the structure and rationale of the differences between correct and failing executions. Testers must therefore spend considerable time and effort investigating executions and interpreting these differences, which reduces the effectiveness of anomaly detection techniques. In this paper, we present the Automata Violations Analyzer (AVA), a technique that automatically produces candidate interpretations of detected failures from the anomalies identified by anomaly detection techniques. Interpretations capture the rationale of the differences between legal and failing executions with user-understandable patterns that simplify the identification of failure causes. An empirical validation with synthetic cases and third-party systems shows that AVA produces useful interpretations.
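To make the comparison step concrete, the following is a minimal sketch in Python of the kind of anomaly detection that AVA builds on. It is a hypothetical illustration with invented event names, not AVA's actual implementation: a behavioral model is inferred from passing executions as the set of observed event transitions, and a failing execution is checked against it; transitions the model does not accept are reported as suspicious anomalous events. Real techniques infer richer finite-state automata, but the structure of the check is the same.

    # Hypothetical sketch of model-based anomaly detection:
    # infer allowed event transitions from passing traces,
    # then flag transitions in a failing trace that the
    # inferred model does not accept.

    def infer_model(passing_traces):
        """Infer a behavioral model as the set of observed transitions."""
        transitions = set()
        for trace in passing_traces:
            for prev, curr in zip(trace, trace[1:]):
                transitions.add((prev, curr))
        return transitions

    def detect_anomalies(model, failing_trace):
        """Return transitions in the failing trace absent from the model."""
        return [
            (prev, curr)
            for prev, curr in zip(failing_trace, failing_trace[1:])
            if (prev, curr) not in model
        ]

    # Example: passing runs always close the resource before disposing it.
    passing = [
        ["open", "read", "close", "dispose"],
        ["open", "write", "close", "dispose"],
    ]
    failing = ["open", "read", "dispose"]  # "close" was skipped

    model = infer_model(passing)
    print(detect_anomalies(model, failing))  # [('read', 'dispose')]

The output of such a detector is exactly the flat set of independent anomalous events described above; AVA's contribution is to go beyond this set and produce structured, user-understandable interpretations of why the failing execution diverges from the legal ones.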