Context: Complexity measures provide information about software artifacts. A measure of how difficult a piece of code is to test could be very useful for controlling the test phase. Objective: The aim of this paper is to define a new measure of the difficulty for a computer to generate test cases, which we call Branch Coverage Expectation (BCE). We also analyze the most common complexity measures and the most important features of a program, in order to discover whether a relationship exists between them and the code coverage achieved by an automatically generated test suite. Method: The definition of the measure is based on a Markov model of the program. This model is used not only to compute the BCE, but also to estimate the number of test cases needed to reach a given coverage level in the program. To validate our proposal, we perform a theoretical validation and carry out an empirical study using 2600 test programs. Results: The results show that the previously existing measures are of little use for estimating the difficulty of testing a program, because they are not highly correlated with code coverage. Our proposed measure is much more strongly correlated with code coverage than the existing complexity measures. Conclusion: The high correlation of our measure with code coverage suggests that BCE is a very promising way of measuring the difficulty of automatically testing a program, and that it is useful for predicting the behavior of an automatic test case generator.