Approaches that support software maintenance need to be evaluated and compared against existing ones in order to demonstrate their usefulness in practice. However, the lack of well-established benchmarks often means these approaches are evaluated on different datasets, which results in biased comparisons. In this data paper we describe and make publicly available a set of benchmarks from six Java applications, which can be used to evaluate various software engineering (SE) tasks, such as feature location and impact analysis. The datasets consist of textual descriptions of change requests, the locations in the source code where those requests were implemented, and execution traces. Four of the benchmarks have already been used in several SE research papers, and two of them are new. In addition, we describe in detail the methodology used for generating these benchmarks and provide a suite of tools to encourage other researchers to validate our datasets and generate new benchmarks for other subject software systems. Our online appendix: http://www.cs.wm.edu/semeru/data/msr13/
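To illustrate how a benchmark of this kind might be consumed in an evaluation, the following is a minimal sketch in Java. It assumes a hypothetical on-disk layout in which each change request has a plain-text description file and a gold-set file listing one code location per line; the directory structure, file names, and the BenchmarkEntry type are illustrative assumptions, not the actual format of the published datasets.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Illustrative model of one benchmark entry: a change request identifier,
// its textual description, and the gold set of source code locations where
// the request was implemented. The layout and file names used below are
// assumptions for the sake of the example.
record BenchmarkEntry(String issueId, String description, List<String> goldSet) {}

public class BenchmarkLoader {

    // Loads one entry from <root>/<issueId>/description.txt and
    // <root>/<issueId>/goldset.txt (hypothetical file names).
    static BenchmarkEntry load(Path root, String issueId) throws IOException {
        Path dir = root.resolve(issueId);
        String description = Files.readString(dir.resolve("description.txt"));
        List<String> goldSet = Files.readAllLines(dir.resolve("goldset.txt"));
        return new BenchmarkEntry(issueId, description, goldSet);
    }

    public static void main(String[] args) throws IOException {
        // Usage: java BenchmarkLoader <benchmark-root-dir> <issue-id>
        BenchmarkEntry entry = load(Path.of(args[0]), args[1]);
        System.out.printf("%s: %d gold-set locations%n",
                entry.issueId(), entry.goldSet().size());
    }
}
```

A loader along these lines would let a feature location or impact analysis technique be scored against the gold set for each change request, with the execution traces consumed separately by dynamic analyses.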