Using Item Response Theory (IRT) to select hints in an ITS

  • Authors:
  • Michael J. Timms

  • Affiliations:
  • WestEd, 300 Lakeside Drive, 25th Floor, Oakland, CA 94612-3534, USA, (510) 302 4214, mtimms@wested.org

  • Venue:
  • Proceedings of the 2007 Conference on Artificial Intelligence in Education: Building Technology Rich Learning Contexts That Work
  • Year:
  • 2007

Abstract

Many Intelligent Tutoring Systems (ITSs) tutor students by providing error feedback and hints as they solve a series of problems, basing that feedback on a model of the student's ability in the subject domain. Few ITSs, however, model the difficulty of the problems the student is solving, and those that do rarely take an empirical approach. Item Response Theory (IRT), a methodology from educational measurement, offers a way to empirically model both a student's ability and the difficulty of a problem on a common scale. This paper first describes a method for using IRT to model the difficulty of the problems (items), which allows the system to determine the level of hint a student needs during problem solving. Second, the paper reports on how the method was used in the FOSS Self-assessment System, an ITS developed to accompany part of a middle school science curriculum module. Finally, the paper presents the results of a study that compared three versions of the Self-assessment System to evaluate its efficacy; the results suggest that the methodology holds some promise but needs further refinement and testing.
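The abstract does not specify which IRT model the Self-assessment System uses or how predicted performance is translated into hints. The sketch below is illustrative only: it assumes a Rasch (one-parameter logistic) model, and the `hint_level` function, its thresholds, and the hint-level labels are hypothetical choices, not details taken from the paper.

```python
import math


def rasch_p_correct(theta: float, b: float) -> float:
    """Rasch (1PL) probability that a student with ability theta
    answers an item of difficulty b correctly. Both parameters
    are on the same logit scale."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))


def hint_level(theta: float, b: float) -> int:
    """Map the predicted probability of success to a hint level.
    Thresholds and levels are illustrative assumptions, not the
    paper's actual policy."""
    p = rasch_p_correct(theta, b)
    if p >= 0.75:
        return 0  # item is easy for this student: no hint
    elif p >= 0.50:
        return 1  # mild, general hint
    elif p >= 0.25:
        return 2  # more specific hint
    else:
        return 3  # most specific hint

# Example: a student of average ability (theta = 0.0) facing a
# moderately hard item (b = 0.5) has p ~ 0.38, so level 2 here.
print(hint_level(0.0, 0.5))  # -> 2
```

The appeal of placing ability and difficulty on a common scale is visible in the sketch: the gap (theta - b) directly determines the predicted chance of success, so a simple threshold policy can decide how much help to offer on each item.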