Using benchmarking to advance research: a challenge to software engineering
Proceedings of the 25th International Conference on Software Engineering
In this paper, we take the concept of benchmarking, as used extensively in computing, and apply it to evaluating C++ fact extractors. We demonstrate the efficacy of this approach by developing a prototype benchmark, CppETS 1.0 (C++ Extractor Test Suite, pronounced "see-pets"), and collecting feedback in a workshop setting. The CppETS benchmark characterises C++ extractors along two dimensions: Accuracy and Robustness. It consists of a series of test buckets that contain small C++ programs and related questions that pose different challenges to the extractors. As in other research areas, benchmarks are best developed through technical work and consultation with a community, so we invited researchers to apply CppETS to their extractors and report their results in a workshop. Four teams participated in this effort, evaluating Ccia, cppx, the Rigi C++ parser, and TkSee/SN. They found that CppETS gave results consistent with their experience with these tools and therefore had good external validity. Workshop participants agreed that CppETS was an important contribution to fact extractor development and testing. Further efforts to make CppETS a widely accepted benchmark will involve technical improvements and collaboration with the broader community.