Scalability improvement in software evaluation methodologies

  • Authors: Hamdy Ibrahim; Behrouz H. Far; Armin Eberlein
  • Affiliations: Department of Electrical and Computer Engineering, University of Calgary, Canada (Ibrahim, Far); Department of Computer Science & Engineering, American University of Sharjah, UAE (Eberlein)
  • Venue: IRI'09 Proceedings of the 10th IEEE International Conference on Information Reuse & Integration
  • Year: 2009


Abstract

Evaluation of software is critical in a world that increasingly relies on software. Several software evaluation methodologies have been developed, but as software solutions increase in number and size, many of these methodologies do not scale. Improving the scalability of software evaluation methodologies is a challenge, and failing to reach a reasonable level of scalability is likely to constrain the adoption of an evaluation methodology. In this paper, a framework for improving the scalability of software evaluation methodologies is proposed. The proposed framework rests on three keystones: categorization of evaluation criteria, dependency among criteria, and methodology adaptation. A case study demonstrates how the proposed framework improves the scalability of a hybrid evaluation model used to rank commercial off-the-shelf (COTS) products for a library system and select the best candidate. The case study is also used to determine which keystones are effective in improving scalability.
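As an illustration of the kind of COTS ranking the abstract describes, the sketch below scores candidate products against evaluation criteria grouped into categories (one of the framework's keystones) and ranks them by a weighted sum. This is a minimal, hypothetical example; the criteria, weights, scores, and product names are invented for illustration and are not taken from the paper's hybrid evaluation model.

```python
# Hypothetical sketch: ranking COTS candidates for a library system
# by weighted criterion scores, with criteria grouped into categories.
# All names, weights, and scores below are illustrative assumptions.

# Criteria grouped by category; each criterion carries a weight in [0, 1].
criteria = {
    "functionality": {"search": 0.4, "cataloging": 0.3},
    "quality":       {"reliability": 0.2, "usability": 0.1},
}

# Candidate COTS products scored per criterion on a 0-10 scale.
candidates = {
    "ProductA": {"search": 8, "cataloging": 6, "reliability": 7, "usability": 9},
    "ProductB": {"search": 6, "cataloging": 9, "reliability": 8, "usability": 7},
}

def weighted_score(scores, criteria):
    """Sum weight * score over every criterion in every category."""
    return sum(
        weight * scores[name]
        for group in criteria.values()
        for name, weight in group.items()
    )

# Rank candidates from best to worst total score.
ranking = sorted(
    candidates,
    key=lambda c: weighted_score(candidates[c], criteria),
    reverse=True,
)
print(ranking)  # best candidate first
```

Grouping criteria into categories as above lets an evaluator prune or reweight whole categories at once, which is one plausible way such categorization could reduce evaluation effort as the number of criteria grows.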