Modeling Students' Natural Language Explanations

  • Authors:
  • Albert Corbett; Angela Wagner; Sharon Lesgold; Harry Ulrich; Scott Stevens

  • Affiliation:
  • Human-Computer Interaction Institute, Carnegie Mellon University, Pittsburgh, PA, USA (all authors)

  • Venue:
  • UM '07 Proceedings of the 11th international conference on User Modeling
  • Year:
  • 2007

Abstract

Intelligent tutoring systems have achieved demonstrable success in supporting formal problem solving. More recently, such systems have begun incorporating student explanations of problem solutions. Typically, these natural language explanations are entered with menus, but some ITSs accept open-ended typed input. Typed input requires more work from both developers and students, and evaluations of its added value for learning outcomes have been mixed. This paper examines whether typed input can yield more accurate student modeling than menu-based input, applying Knowledge Tracing student modeling to natural language inputs and examining the standard Knowledge Tracing definition of errors. The analyses indicate that typed explanations can yield more predictive models of student test performance than menu-based explanations, and that focusing on semantic errors can further improve predictive accuracy.