Skoll: Distributed Continuous Quality Assurance

  • Authors:
  • A. Memon; A. Porter; C. Yilmaz; A. Nagarajan; D. Schmidt; B. Natarajan

  • Affiliations:
  • University of Maryland at College Park; University of Maryland at College Park; University of Maryland at College Park; University of Maryland at College Park; Vanderbilt University; Vanderbilt University

  • Venue:
  • Proceedings of the 26th International Conference on Software Engineering
  • Year:
  • 2004

Abstract

Quality assurance (QA) tasks, such as testing, profiling, and performance evaluation, have historically been done in-house on developer-generated workloads and regression suites. Since this approach is inadequate for many systems, tools and processes are being developed to improve software quality by increasing user participation in the QA process. A limitation of these approaches is that they focus on isolated mechanisms, not on the coordination and control policies and tools needed to make the global QA process efficient, effective, and scalable. To address these issues, we have initiated the Skoll project, which is developing and validating novel software QA processes and tools that leverage the extensive computing resources of worldwide user communities in a distributed, continuous manner to significantly and rapidly improve software quality. This paper provides several contributions to the study of distributed continuous QA. First, it illustrates the structure and functionality of a generic around-the-world, around-the-clock QA process and describes several sophisticated tools that support this process. Second, it describes several QA scenarios built using these tools and this process. Finally, it presents a feasibility study applying these scenarios to a 1MLOC+ software package called ACE+TAO. While much work remains to be done, the study suggests that the Skoll process and tools effectively manage and control distributed, continuous QA processes. Using Skoll, we rapidly identified problems that had taken the ACE+TAO developers substantially longer to find, several of which had not previously been found. Moreover, automatic analysis of QA task results often provided developers with information that quickly led them to the root cause of the problems.