Empirical study on benchmarking software development tasks

  • Authors:
  • Li Ruan; Yongji Wang; Qing Wang; Mingshu Li; Yun Yang; Lizi Xie; Dapeng Liu; Haitao Zeng; Shen Zhang; Junchao Xiao; Lei Zhang; M. Wasif Nisar; Jian Dai

  • Affiliations:
  • All authors: Institute of Software, Chinese Academy of Sciences, Beijing, China. Li Ruan, Lizi Xie, Dapeng Liu, Haitao Zeng, Shen Zhang, Junchao Xiao, Lei Zhang, M. Wasif Nisar, and Jian Dai are also with the Graduate University, Chinese Academy of Sciences, Beijing, China; Yun Yang is also with the Center for Information Technology Research, Swinburne University of Technology, Australia.

  • Venue:
  • ICSP'07: Proceedings of the 2007 International Conference on Software Process
  • Year:
  • 2007

Abstract

Benchmarking is one of the most important methods for learning best practices in software process improvement. However, in the current software process context, benchmarking is applied mainly to projects rather than to software development tasks. Can software development tasks be benchmarked? If so, how? Moreover, benchmarking software development tasks must handle multiple variables and variable returns to scale (VRS). This paper reports practical experience of benchmarking software development tasks under multivariate and VRS constraints using Data Envelopment Analysis (DEA). The analysis of experience data from the Institute of Software, Chinese Academy of Sciences (ISCAS) indicates that the ideas and techniques of benchmarking software projects can be deployed at the software development task level. Moreover, the results show that the DEA VRS model gives developers new insight into how to identify the relatively efficient tasks as the task performance benchmark and how to establish a distinct reference set for each relatively inefficient task under multivariate and VRS constraints. We thus recommend the DEA VRS model as the default technique for benchmarking software development tasks. Our results are beneficial to software process improvement. To the best of our knowledge, this is the first report of such comprehensive and repeatable results for benchmarking software development tasks using DEA.
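The abstract names the technique but not the model. For illustration only (the paper's actual formulation and data are not reproduced here), the input-oriented VRS DEA model (the BCC model) scores a task with input x0 and output y0 by solving: minimize θ subject to Σλᵢxᵢ ≤ θx0, Σλᵢyᵢ ≥ y0, Σλᵢ = 1, λ ≥ 0; the units with λᵢ > 0 form the task's reference set. In the special case of one input and one output, the optimum of this linear program is attained at a convex combination of at most two units, so a sketch can enumerate singletons and pairs instead of calling an LP solver. The function name and sample data below are hypothetical:

```python
from itertools import combinations

def vrs_input_efficiency(units):
    """Input-oriented VRS (BCC) efficiency for single-input, single-output units.

    units: list of (input, output) pairs, e.g. (effort, delivered size).
    Returns one efficiency score in (0, 1] per unit; 1.0 means the unit
    lies on the VRS frontier and can serve as a performance benchmark.
    With one input and one output, the BCC optimum uses at most two
    reference units, so enumerating singletons and pairs is exact.
    """
    scores = []
    for x0, y0 in units:
        best = x0  # the unit itself is always a feasible reference
        # Single reference units: any unit producing at least y0.
        for xi, yi in units:
            if yi >= y0:
                best = min(best, xi)
        # Convex combinations of two reference units.
        for (xi, yi), (xj, yj) in combinations(units, 2):
            # Candidate lambdas: interval endpoints of the feasible set
            # {lam in [0,1] : lam*yi + (1-lam)*yj >= y0}; a linear
            # objective over an interval attains its minimum there.
            cands = [lam for lam in (0.0, 1.0)
                     if lam * yi + (1 - lam) * yj >= y0]
            if yi != yj:
                lam = (y0 - yj) / (yi - yj)  # output-constraint boundary
                if 0.0 <= lam <= 1.0:
                    cands.append(lam)
            for lam in cands:
                best = min(best, lam * xi + (1 - lam) * xj)
        scores.append(best / x0)
    return scores

if __name__ == "__main__":
    # Four hypothetical tasks as (effort, output) pairs.
    tasks = [(2, 2), (4, 5), (6, 6), (4, 3)]
    print(vrs_input_efficiency(tasks))
```

In this toy data the fourth task scores below 1: a convex combination of the first two tasks produces its output with only 2/3 of its effort, so those two tasks form its reference set, which is exactly the kind of per-task reference set the abstract describes.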