On Using Learning Curves to Evaluate ITS

  • Authors:
  • Brent Martin; Kenneth R. Koedinger; Antonija Mitrovic; Santosh Mathan

  • Affiliations:
  • Brent Martin, Antonija Mitrovic: Intelligent Computer Tutoring Group, University of Canterbury, Private Bag 4800, Christchurch, New Zealand, {brent,tanja}@cosc.canterbury.ac.nz
  • Kenneth R. Koedinger, Santosh Mathan: HCI Institute, Carnegie Mellon University, Pittsburgh, PA 15213

  • Venue:
  • Proceedings of the 2005 conference on Artificial Intelligence in Education: Supporting Learning through Intelligent and Socially Informed Technology
  • Year:
  • 2005

Abstract

Measuring the efficacy of an ITS can be hard because there are many confounding factors: short, well-isolated studies suffer from insufficient interaction with the system, while longer studies may be affected by the students' other learning activities. Coarse measurements such as pre- and post-testing are often inconclusive. Learning curves are an alternative tool: the slope and fit of a learning curve show the rate at which the student learns and reveal how well the system model matches what the student is actually learning. The downside is that learning curves are extremely sensitive to changes in the system's setup, which arguably makes them useless for comparing different tutors. We describe these problems in detail and our experiences with them. We also suggest some other ways of using learning curves that may be more useful for making such comparisons.
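
The abstract refers to the slope and fit of learning curves as evaluation measures. As a rough illustration (not taken from the paper), the sketch below fits a standard power-law learning curve to mean error rates per practice opportunity; the exponent approximates the learning rate and R² indicates how well the curve, and hence the system's model of the domain, fits the observed data. All data values and variable names are invented for demonstration.

```python
# Illustrative sketch: fitting a power-law learning curve
#   error_rate = a * opportunity^(-b)
# to aggregate student performance data. The fitted exponent b reflects the
# rate of learning; R^2 indicates how well the model fits the observations.
# The numbers below are hypothetical, not from the paper.

import numpy as np

# Mean error rate at each practice opportunity (hypothetical data)
opportunities = np.arange(1, 11)
error_rates = np.array([0.52, 0.41, 0.35, 0.30, 0.28,
                        0.25, 0.24, 0.22, 0.21, 0.20])

# A power law is linear in log-log space:
#   log(error) = log(a) - b * log(opportunity)
log_x, log_y = np.log(opportunities), np.log(error_rates)
slope, log_a = np.polyfit(log_x, log_y, 1)
a, b = np.exp(log_a), -slope

# Goodness of fit (R^2) computed in log-log space
predicted = log_a + slope * log_x
ss_res = np.sum((log_y - predicted) ** 2)
ss_tot = np.sum((log_y - log_y.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"error = {a:.3f} * opportunity^(-{b:.3f}),  R^2 = {r_squared:.3f}")
```

In practice such curves are typically plotted per knowledge component or constraint rather than over all items pooled; a poor fit for a particular component suggests the system model does not capture what students are actually learning there.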