Software testing techniques and criteria are considered complementary, since they reveal different kinds of faults and exercise distinct aspects of the program. Functional criteria, such as Category Partition, are difficult to automate and are usually applied manually. Structural and fault-based criteria generally provide measures for evaluating test sets. Existing supporting tools produce a great deal of information, including test inputs and produced outputs, structural coverage, mutation score, and faults revealed. However, this information is not linked to the functional aspects of the software. In this work, we present an approach based on machine learning techniques to link test results obtained from the application of different testing techniques. The approach groups test data into functionally similar clusters and then, according to the tester's goals, generates classifiers (rules) with different uses, including the selection and prioritization of test cases. The paper also presents results from experimental evaluations and illustrates these uses.
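The abstract describes a two-step pipeline: cluster test executions by functional similarity, then derive classification rules from the clusters to support selection and prioritization. The following Python sketch illustrates that general idea under stated assumptions; the per-test-case features, the use of k-means as the clustering algorithm, and the decision tree as the rule learner are illustrative choices, not necessarily the paper's exact techniques.

    # Minimal sketch: cluster test cases into functional groups, then learn
    # rules that characterize each group. Feature set and algorithm choices
    # (k-means, decision tree) are assumptions made for illustration only.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical per-test-case measures, e.g. structural coverage and
    # mutation score collected by supporting tools.
    features = np.array([
        [0.90, 0.75],   # test case 1
        [0.88, 0.70],   # test case 2
        [0.30, 0.20],   # test case 3
        [0.35, 0.25],   # test case 4
    ])

    # Step 1: group test cases into functionally similar clusters.
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

    # Step 2: learn classification rules that describe each cluster.
    tree = DecisionTreeClassifier(max_depth=2).fit(features, clusters)
    print(export_text(tree, feature_names=["coverage", "mutation_score"]))

The printed rules could then be used according to the tester's goals, for example by selecting one representative test case per cluster or by prioritizing clusters whose rules indicate weaker coverage.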