A mapping study on empirical evidence related to the models and forms used in the UML
Proceedings of the Second ACM-IEEE International Symposium on Empirical Software Engineering and Measurement
An investigation of use case quality in a large safety-critical software development project
Information and Software Technology
What's up with software metrics? - A preliminary mapping study
Journal of Systems and Software
The educational value of mapping studies of software engineering literature
Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - Volume 1
Measuring and predicting software productivity: A systematic map and review
Information and Software Technology
Identifying relevant studies in software engineering
Information and Software Technology
Reporting computing projects through structured abstracts: a quasi-experiment
Empirical Software Engineering
Preliminary reporting guidelines for experience papers
EASE'09 Proceedings of the 13th International Conference on Evaluation and Assessment in Software Engineering
The value of mapping studies: a participant-observer case study
EASE'10 Proceedings of the 14th International Conference on Evaluation and Assessment in Software Engineering
On searching relevant studies in software engineering
EASE'10 Proceedings of the 14th International Conference on Evaluation and Assessment in Software Engineering
What scope is there for adopting evidence-informed teaching in SE?
Proceedings of the 34th International Conference on Software Engineering
On evaluating commercial Cloud services: A systematic review
Journal of Systems and Software
When conducting a systematic literature review, researchers usually determine the relevance of primary studies on the basis of the title and abstract. However, experience indicates that the abstracts of many software engineering papers are of too poor a quality to be used for this purpose. A solution adopted in other domains is to employ structured abstracts to improve the quality of the information provided. This study consists of a formal experiment to investigate whether structured abstracts are more complete and easier to understand than non-structured abstracts for papers that describe software engineering experiments. We constructed structured versions of the abstracts for a random selection of 25 papers describing software engineering experiments. The 64 participants were each presented with one abstract in its original unstructured form and one in a structured form, and for each one were asked to assess its clarity (measured on a scale of 1 to 10) and completeness (measured with a questionnaire that used 18 items). Based on a regression analysis that adjusted for participant, abstract, type of abstract seen first, knowledge of structured abstracts, software engineering role, and preference for conventional or structured abstracts, the use of structured abstracts increased the completeness score by 6.65 (SE 0.37, p < 0.001).
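The adjusted regression described above can be illustrated with a small simulation. The sketch below is hypothetical and not the authors' analysis: it assumes a simplified within-subject design (each of 64 participants rates one structured and one unstructured abstract), generates synthetic completeness scores with an assumed treatment effect of 6.65 points, and recovers that effect from a linear model with a treatment dummy plus participant dummies, i.e. a regression adjusted for participant.

```python
import numpy as np

rng = np.random.default_rng(0)

n_participants = 64
# Each participant rates one unstructured (0) and one structured (1)
# abstract, mirroring the paired design described in the abstract.
participant = np.repeat(np.arange(n_participants), 2)
structured = np.tile([0, 1], n_participants)

# Hypothetical data-generating process: a per-participant baseline on the
# 18-item completeness score plus an assumed treatment effect of 6.65.
baseline = rng.normal(10.0, 1.5, n_participants)
true_effect = 6.65
y = (baseline[participant]
     + true_effect * structured
     + rng.normal(0.0, 0.5, structured.size))

# Design matrix: intercept, treatment dummy, and participant dummies
# (one participant dropped to avoid collinearity) -- the adjustment.
X = np.column_stack([
    np.ones(y.size),
    structured,
    (participant[:, None] == np.arange(1, n_participants)).astype(float),
])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated structured-abstract effect: {coef[1]:.2f}")
```

Because the treatment effect is estimated within participants, the participant dummies absorb baseline differences between raters, so the estimate of `coef[1]` lands close to the assumed 6.65 even with rating noise.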