Classification of reusable modules
Software reusability: vol. 1, concepts and models
The Cost of Data Flow Testing: An Empirical Study
IEEE Transactions on Software Engineering
Software testing techniques (2nd ed.)
Support for comprehensive reuse
Software Engineering Journal - Special issue on software process and its support
Software engineering (5th ed.)
Experiments on the effectiveness of dataflow- and control-flow-based test adequacy criteria
ICSE '94 Proceedings of the 16th international conference on Software engineering
Comparing and combining software defect detection techniques: a replicated empirical study
ESEC '97/FSE-5 Proceedings of the 6th European Software Engineering Conference held jointly with the 5th ACM SIGSOFT international symposium on Foundations of software engineering
Software engineering: theory and practice
Further empirical studies of test effectiveness
SIGSOFT '98/FSE-6 Proceedings of the 6th ACM SIGSOFT international symposium on Foundations of software engineering
Proceedings of the Conference on The Future of Software Engineering
The Art of Software Testing
An Experimental Comparison of the Effectiveness of Branch Testing and Data Flow Testing
IEEE Transactions on Software Engineering
Defining factors, goals and criteria for reusable component evaluation
CASCON '96 Proceedings of the 1996 conference of the Centre for Advanced Studies on Collaborative research
Modelling the application domains of software engineering technologies
ASE '97 Proceedings of the 12th international conference on Automated software engineering (formerly: KBSE)
Accelerating the Successful Reuse of Problem Solving Knowledge Through the Domain Lifecycle
ICSR '96 Proceedings of the 4th International Conference on Software Reuse
Reviewing 25 Years of Testing Technique Experiments
Empirical Software Engineering
A state-of-practice questionnaire on verification and validation for concurrent programs
Proceedings of the 2006 workshop on Parallel and distributed systems: testing and debugging
WoSQ '07 Proceedings of the 5th International Workshop on Software Quality
Supporting the selection of model-based testing approaches for software projects
Proceedings of the 3rd international workshop on Automation of software test
An approach for component testing and its empirical validation
Proceedings of the 2009 ACM symposium on Applied Computing
Model-based testing approaches selection for software projects
Information and Software Technology
Measuring testing as a distributed component of the software life cycle
Journal of Computational Methods in Sciences and Engineering
Evaluation of model-based testing techniques selection approaches: An external replication
ESEM '09 Proceedings of the 2009 3rd International Symposium on Empirical Software Engineering and Measurement
Applying empirical software engineering to software architecture: challenges and lessons learned
Empirical Software Engineering
Determining organization-specific process suitability
ICSP'10 Proceedings of the 2010 international conference on New modeling concepts for today's software processes: software process
An analysis of a comprehensive planning framework for customizing SQA
NSEC '10 Proceedings of the 2010 National Software Engineering Conference
Towards a reasoning framework for software product line testing
Proceedings of the 16th International Software Product Line Conference - Volume 2
An architectural model for software testing lesson learned systems
Information and Software Technology
SBIA'12 Proceedings of the 21st Brazilian conference on Advances in Artificial Intelligence
Proceedings of the Winter Simulation Conference
Testing techniques selection based on ODC fault types and software metrics
Journal of Systems and Software
A learning-based method for combining testing techniques
Proceedings of the 2013 International Conference on Software Engineering
One of the major problems in software testing is how to get a suitable set of cases to test a software system. This set should assure maximum effectiveness with the least possible number of test cases. Numerous testing techniques are now available for generating test cases. However, many are never used, while a few are used over and over again. Testers have little (if any) information about the available techniques, their usefulness and, generally, how well suited they are to the project at hand, on which to base their decision about which testing techniques to use. This paper presents the results of developing and evaluating an artefact (specifically, a characterisation schema) to assist with testing technique selection. When instantiated for a variety of techniques, the schema provides developers with a catalogue containing enough information to select the techniques best suited to a given project. This ensures that their decisions are based on objective knowledge of the techniques rather than on perceptions, suppositions and assumptions.
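To make the idea of a technique catalogue concrete, the sketch below shows one hypothetical way such a characterisation schema could be represented and queried. The attribute names (cost, tool support, applicable domains) and the selection rule are illustrative assumptions only; they are not the attributes defined in the paper's actual schema.

```python
# Illustrative sketch only: these schema fields are hypothetical
# placeholders, not the attributes from the paper's characterisation schema.
from dataclasses import dataclass, field

@dataclass
class TechniqueEntry:
    """One catalogue entry characterising a testing technique."""
    name: str
    cost: str                  # hypothetical ordinal scale: "low"/"medium"/"high"
    tool_support: bool
    applicable_domains: set = field(default_factory=set)

def select(catalogue, domain, acceptable_costs=("low", "medium")):
    """Return techniques applicable to a domain within an acceptable cost band."""
    return [t.name for t in catalogue
            if domain in t.applicable_domains and t.cost in acceptable_costs]

catalogue = [
    TechniqueEntry("branch testing", "low", True, {"embedded", "web"}),
    TechniqueEntry("data-flow testing", "high", False, {"embedded"}),
]
print(select(catalogue, "embedded"))  # ['branch testing']
```

The point of such a structure is the one the abstract makes: selection becomes a query over recorded, objective attributes rather than a judgment based on perception or habit.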