Abstract: Machine learning algorithms provide core functionality to many application domains, such as bioinformatics and computational linguistics. However, it is difficult to detect faults in such applications because there is often no "test oracle" to verify the correctness of the computed outputs. To help address this software quality problem, we present in this paper a technique for testing the implementations of the machine learning classification algorithms that support such applications. Our approach is based on "metamorphic testing", a technique that has been shown to be effective in alleviating the oracle problem. We also present a case study on a real-world machine learning application framework, and a discussion of how programmers implementing machine learning algorithms can avoid the common pitfalls discovered in our study. In addition, we conduct mutation analysis and cross-validation, which reveal that our method is highly effective at killing mutants, and that observing an expected cross-validation result alone is not sufficient to detect faults in a supervised classification program. The effectiveness of metamorphic testing is further confirmed by its detection of real faults in a popular open-source classification program.
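To illustrate the idea behind metamorphic testing of a classifier, the following is a minimal, hypothetical sketch (not the paper's actual test suite or subject program). It uses a toy k-nearest-neighbour implementation and one example metamorphic relation: permuting the order of the training samples must not change the predicted label, since k-NN is order-independent. A follow-up test case (a shuffled training set) is checked against the source test case's output, so no external oracle for the "correct" label is needed.

```python
# Hypothetical sketch of metamorphic testing applied to a k-NN classifier.
# The classifier, data, and relation here are illustrative assumptions,
# not taken from the paper's case study.
import random

def knn_predict(train, k, x):
    """Classify x by majority vote among the k nearest training samples.
    train: list of ((feature, ...), label) pairs; x: a feature tuple."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    neighbors = sorted(train, key=lambda sample: dist(sample[0], x))[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count)

def check_permutation_mr(train, k, x, trials=10):
    """Metamorphic relation: shuffling the training set must not change
    the prediction. Returns False if any follow-up output disagrees with
    the source output, signalling a likely fault in the implementation."""
    source_output = knn_predict(train, k, x)
    for _ in range(trials):
        shuffled = train[:]
        random.shuffle(shuffled)            # follow-up test case
        if knn_predict(shuffled, k, x) != source_output:
            return False                    # MR violated
    return True

train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"),
         ((1.0, 1.0), "B"), ((0.9, 1.1), "B")]
print(check_permutation_mr(train, k=3, x=(0.05, 0.1)))
```

A buggy implementation that, for example, silently depends on input order when breaking distance ties would be caught by this relation even though the tester never knows the "true" label of the query point, which is the essence of how metamorphic testing alleviates the oracle problem.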