Facilitating and automating empirical evaluation

  • Authors:
  • Laurian Hobby, John Booker, D. Scott McCrickard, C. M. Chewar, Jason Zietz

  • Affiliations:
  • Virginia Polytechnic Institute and State University (Virginia Tech), Blacksburg, VA (all authors)

  • Venue:
  • Proceedings of the 43rd annual Southeast regional conference - Volume 1
  • Year:
  • 2005

Abstract

Through the automation of empirical evaluation, we hope to alleviate evaluation problems encountered by software designers who are relatively new to the process. Barriers to good empirical evaluation include the tedium of setting up a new test for each project and the time and expertise needed to set up a quality test. We aim to make the evaluation process accessible to a wider variety of software designers by reducing the time and effort required for evaluation through a wizard-like system that does not require expertise in evaluation techniques. The implementation uses a library of design knowledge, in the form of claims, to focus the evaluations. User tests were performed to evaluate receptiveness to the software tool as well as the performance of the underlying methods. Results were positive, justifying further research in this area while also exposing problem areas for improvement.
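To illustrate the claims-driven approach described in the abstract, the sketch below shows one way a claims library might feed a wizard-like evaluation setup. This is a hypothetical illustration, not the authors' implementation: the names `Claim`, `ClaimLibrary`, and `plan_evaluation`, and the idea of attaching candidate metrics to each claim, are assumptions made for the example.

```python
# Hypothetical sketch of claims-based evaluation planning (assumed structure,
# not the paper's actual system): each claim pairs an interface feature with
# hypothesized upsides/downsides and the metrics that would test it, and the
# wizard derives an evaluation plan from the claims chosen for a project.

from dataclasses import dataclass, field


@dataclass
class Claim:
    feature: str                   # interface feature the claim is about
    upsides: list[str]             # hypothesized benefits
    downsides: list[str]           # hypothesized costs
    metrics: list[str] = field(default_factory=list)  # measures that test the claim


class ClaimLibrary:
    """A minimal store of reusable design knowledge expressed as claims."""

    def __init__(self) -> None:
        self._claims: list[Claim] = []

    def add(self, claim: Claim) -> None:
        self._claims.append(claim)

    def find(self, keyword: str) -> list[Claim]:
        """Return claims whose feature description mentions the keyword."""
        return [c for c in self._claims if keyword.lower() in c.feature.lower()]


def plan_evaluation(selected: list[Claim]) -> list[str]:
    """Collect the distinct metrics suggested by the selected claims."""
    metrics: list[str] = []
    for claim in selected:
        for m in claim.metrics:
            if m not in metrics:
                metrics.append(m)
    return metrics


if __name__ == "__main__":
    library = ClaimLibrary()
    library.add(Claim(
        feature="animated notification ticker",
        upsides=["keeps users aware of updates"],
        downsides=["may distract from the primary task"],
        metrics=["interruption cost", "reaction time", "comprehension"],
    ))
    chosen = library.find("notification")
    print(plan_evaluation(chosen))   # -> suggested measures for the user test
```

The point of the sketch is the division of labor the abstract implies: the library holds reusable evaluation knowledge, so a designer without evaluation expertise only selects the claims relevant to their project and receives a focused set of measures in return.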