Despite the growing use of web applications, extreme resource constraints during their development frequently leave them inadequately tested. Because testing web applications may be perceived as offering a low return on investment, we believe that a model of consumer-perceived fault severity could allow developers to prioritize faults by their likelihood of affecting consumer retention, encouraging web application developers to test more effectively. In a study involving 386 humans and 800 web application faults, we observe that individual human judgments of fault severity are unreliable. We thus present two models of fault severity that outperform individual humans at predicting the average consumer-perceived severity of web application faults. Our first model uses human annotations of fault surface features and is 87% accurate at identifying low-priority, non-severe faults. We also present a fully automated conservative model that correctly identifies 55% of non-severe faults without missing any severe faults. Both models outperform humans at flagging severe faults, and can substitute for or reinforce human judgment when prioritizing faults encountered during web application development and testing.
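To make the idea concrete, the sketch below shows one plausible realization of such a severity model: a decision-tree classifier over binary surface features of a fault, deployed conservatively so that a fault is dismissed as non-severe only when the predicted probability is high. The scikit-learn usage, the feature names (visible_error_text, layout_broken, form_blocked), and the toy training data are all illustrative assumptions, not the paper's actual features or data set.

```python
# Minimal sketch of a binary fault-severity classifier.
# Assumes scikit-learn; features and data are hypothetical, for illustration only.
from sklearn.tree import DecisionTreeClassifier

# Each row: [visible_error_text, layout_broken, form_blocked], values 0/1.
# Labels: 1 = severe (likely to hurt consumer retention), 0 = non-severe.
X_train = [
    [1, 1, 1],
    [0, 0, 1],
    [1, 0, 0],
    [0, 0, 0],
    [0, 1, 0],
    [1, 1, 0],
]
y_train = [1, 1, 0, 0, 0, 1]

clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

def triage(features, threshold=0.9):
    """Conservative triage: only dismiss a fault as non-severe when the
    model is highly confident, so severe faults are never silently dropped."""
    p_non_severe = clf.predict_proba([features])[0][0]  # classes_ = [0, 1]
    return "non-severe" if p_non_severe >= threshold else "review as severe"

print(triage([0, 0, 0]))  # e.g. "non-severe"
print(triage([1, 0, 1]))  # e.g. "review as severe"
```

Raising the threshold trades recall of non-severe faults for the guarantee of not discarding severe ones, mirroring the trade-off the conservative model described above makes: identifying 55% of non-severe faults while missing no severe faults.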