Content Matters: An Investigation of Feedback Categories within an ITS

  • Authors:
  • G. Tanner Jackson and Arthur C. Graesser

  • Affiliations:
  • Department of Psychology, University of Memphis, Memphis, TN, USA

  • Venue:
  • Proceedings of the 2007 conference on Artificial Intelligence in Education: Building Technology Rich Learning Contexts That Work
  • Year:
  • 2007

Abstract

The primary goal of this study was to investigate the role of feedback in an intelligent tutoring system (ITS) with natural language dialogue. One core component of tutorial dialogue is feedback, which carries the primary burden of informing students of their performance. AutoTutor is an ITS with tutorial dialogue that was developed at the University of Memphis. This article addresses the effectiveness of two types of feedback (content & progress) while college students interacted with AutoTutor on conceptual physics. Content feedback provides qualitative information about the domain content and its accuracy as it is covered in a tutoring session. Progress feedback is a quantitative assessment of the student's advancement through the material being covered (i.e., how far the student has come and how much farther there is to go). A factorial design manipulated the presence or absence of each feedback category (content & progress), so each student interacted with one of four versions of AutoTutor that varied in the type of feedback provided. Data analyses showed significant effects of feedback on learning and motivational measures, supporting the notion that “content matters” and the adage “no pain, no gain.”