Context: Many researchers adopting systematic reviews (SRs) have also published papers discussing problems with the SR methodology and suggestions for improving it. Since guidelines for SRs in software engineering (SE) were last updated in 2007, we believe it is time to investigate whether the guidelines need to be amended in the light of recent research.

Objective: To identify, evaluate and synthesize research published by software engineering researchers concerning their experiences of performing SRs and their proposals for improving the SR process.

Method: We undertook a systematic review of papers reporting experiences of undertaking SRs and/or discussing techniques that could be used to improve the SR process. Studies were classified with respect to the stage in the SR process they addressed, whether they related to education or problems faced by novices, and whether they proposed the use of textual analysis tools.

Results: We identified 68 papers reporting 63 unique studies published in SE conferences and journals between 2005 and mid-2012. The most common criticisms of SRs were that they take a long time, that SE digital libraries are not appropriate for broad literature searches, and that assessing the quality of empirical studies of different types is difficult.

Conclusion: We recommend removing advice to use structured questions to construct search strings and including advice to use a quasi-gold standard based on a limited manual search to assist the construction of search strings and the evaluation of the search process. Textual analysis tools are likely to be useful for inclusion/exclusion decisions and search string construction but require more stringent evaluation. SE researchers would benefit from tools to manage the SR process, but existing tools need independent validation. Quality assessment of studies using a variety of empirical methods remains a major problem.
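The quasi-gold standard approach recommended in the conclusion evaluates an automated search by checking how many papers from a limited manual search (the quasi-gold standard) the search string actually retrieves. A minimal sketch of that evaluation; the function name and paper identifiers are illustrative, not taken from the study:

```python
def evaluate_search(retrieved, quasi_gold_standard):
    """Compare an automated search result against a quasi-gold standard.

    Returns (quasi-sensitivity, precision):
    quasi-sensitivity = fraction of known-relevant papers that were retrieved,
    precision         = fraction of retrieved papers that are known relevant.
    """
    retrieved = set(retrieved)
    qgs = set(quasi_gold_standard)
    found = retrieved & qgs
    quasi_sensitivity = len(found) / len(qgs)
    precision = len(found) / len(retrieved)
    return quasi_sensitivity, precision

# Hypothetical paper IDs:
qgs = {"p1", "p2", "p3", "p4", "p5"}        # from the limited manual search
hits = {"p1", "p2", "p3", "x1", "x2"}       # from the automated search string
sens, prec = evaluate_search(hits, qgs)
# sens = 0.6 -> the search string misses known-relevant papers
# and likely needs refinement before the full search is run
```

A low quasi-sensitivity signals that the search string should be revised before committing to the full (and time-consuming) automated search, which is the point of constructing the standard first.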