Software monitoring systems incur high performance overhead because they typically monitor the entire running program. For example, to capture and replay crashes, most current systems monitor all methods, which yields a significant performance overhead. Restricting monitoring to a smaller subset of methods can dramatically reduce this overhead. We present an approach that helps arrive at such a subset by reliably identifying the methods that are the most likely candidates to crash in a future execution of the software. Our approach learns patterns from features of methods that previously crashed and uses them to classify new methods as crash-prone or non-crash-prone. An evaluation of our approach on two large open source projects, ASPECTJ and ECLIPSE, shows that we can correctly classify crash-prone methods with an accuracy of 80--86%. Notably, we found that the classification models can also be used for cross-project prediction with virtually no loss in classification accuracy. In a further experiment, we demonstrate how a monitoring tool, RECRASH, could take advantage of monitoring only crash-prone methods, thereby reducing its performance overhead while maintaining its ability to perform its intended tasks.
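The core idea can be sketched as a standard supervised-classification loop: extract features from methods that crashed in past executions, train a classifier, and label unseen methods as crash-prone or not. The feature set (bytecode length, call count, exception-handler count) and the 1-nearest-neighbour learner below are illustrative assumptions for the sketch, not the features or classifier used in the paper.

```python
# Hedged sketch: classify methods as crash-prone (1) or non-crash-prone (0)
# from numeric features of past crashes. Features and learner are assumptions.

def sq_dist(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(train_X, train_y, x):
    # 1-nearest-neighbour: return the label of the closest training method.
    nearest = min(range(len(train_X)), key=lambda j: sq_dist(train_X[j], x))
    return train_y[nearest]

# Hypothetical per-method features:
# (bytecode length, number of method calls, number of exception handlers)
train_X = [(120, 15, 0), (300, 40, 1), (80, 5, 0),
           (500, 60, 2), (60, 3, 0), (400, 55, 3)]
train_y = [0, 1, 0, 1, 0, 1]  # 1 = method crashed in a past execution

# Classify two unseen methods.
test_X = [(350, 45, 2), (90, 6, 0)]
preds = [predict(train_X, train_y, x) for x in test_X]
print(preds)  # -> [1, 0]: first method crash-prone, second not
```

A monitoring tool such as RECRASH could then instrument only the methods predicted crash-prone, which is the overhead reduction the abstract describes.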