Integrating automated test generation into the WYSIWYT spreadsheet testing methodology

  • Authors:
  • Marc Fisher, II (University of Nebraska-Lincoln, Lincoln, NE); Gregg Rothermel (University of Nebraska-Lincoln, Lincoln, NE); Darren Brown (Oregon State University, Corvallis, OR); Mingming Cao (Oregon State University, Corvallis, OR); Curtis Cook (Oregon State University, Corvallis, OR); Margaret Burnett (Oregon State University, Corvallis, OR)

  • Venue:
  • ACM Transactions on Software Engineering and Methodology (TOSEM)
  • Year:
  • 2006


Abstract

Spreadsheet languages, which include commercial spreadsheets and various research systems, have had a substantial impact on end-user computing. Research shows, however, that spreadsheets often contain faults. Thus, in previous work we presented a methodology that helps spreadsheet users test their spreadsheet formulas. Our empirical studies have shown that end users can use this methodology to test spreadsheets more adequately and efficiently; however, the process of generating test cases can still present a significant impediment. To address this problem, we have been investigating how to incorporate automated test case generation into our testing methodology in ways that support incremental testing and provide immediate visual feedback. We have used two techniques for generating test cases, one involving random selection and one involving a goal-oriented approach. We describe these techniques and their integration into our testing environment, and report results of an experiment examining their effectiveness and efficiency.
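The random-selection technique mentioned in the abstract can be illustrated with a minimal sketch: generate random values for a formula's input cells, evaluate the formula on each assignment, and present the resulting input/output pairs to the user for validation. This is a hypothetical illustration of the general idea, not the authors' implementation; the function and parameter names (`random_test_cases`, `n_trials`, `value_range`) are invented for the example.

```python
import random

def random_test_cases(formula, input_cells, n_trials=100, value_range=(-100, 100)):
    """Randomly generate candidate test cases for a spreadsheet formula.

    formula: a function mapping a dict of cell values to the formula's output.
    input_cells: names of the cells the formula reads.
    Returns a list of (inputs, output) pairs for the user to inspect and
    mark as correct or incorrect.
    """
    cases = []
    for _ in range(n_trials):
        # Assign a random integer to each input cell.
        inputs = {cell: random.randint(*value_range) for cell in input_cells}
        # Evaluate the formula under test on this random assignment.
        cases.append((inputs, formula(inputs)))
    return cases

# Example: a pass/fail formula analogous to IF(B1 >= 60, "pass", "fail").
grade = lambda cells: "pass" if cells["B1"] >= 60 else "fail"
cases = random_test_cases(grade, ["B1"], n_trials=10)
```

A goal-oriented generator would differ by steering input values toward a specific untested outcome (for example, forcing the `B1 >= 60` branch to evaluate both ways) rather than sampling blindly.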