In this paper we show that model identifiability is an issue for student modeling: observed student performance corresponds to an infinite family of possible model parameter estimates, all of which make identical predictions about student performance. However, these parameter estimates make different claims, some clearly incorrect, about the student's unobservable internal knowledge. We propose methods for evaluating these models to find more plausible ones. Specifically, we present an approach that uses Dirichlet priors to bias model search, yielding a statistically reliable improvement in predictive accuracy (AUC of 0.620 ± 0.002 vs. 0.614 ± 0.002). Furthermore, the parameters associated with this model provide more plausible estimates of student learning and track better with known properties of students' background knowledge. The main conclusion is that prior beliefs are necessary to bias the student modeling search; even large quantities of performance data alone are insufficient to properly estimate the model.
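The identifiability problem can be illustrated with a minimal sketch of knowledge tracing (not the paper's actual code; parameter values are invented for illustration). In standard knowledge tracing the expected performance curve is P(correct) = P(L)·(1 − slip) + (1 − P(L))·guess, with P(L) updated by the learning rate each opportunity. Two parameter sets that differ in initial knowledge and guess can produce identical performance curves while making very different claims about what the student knows; a Beta log-prior (the two-outcome case of a Dirichlet prior) added to the likelihood is one way to bias the search away from implausible regions:

```python
from math import log

def bkt_marginal_curve(L0, T, G, S, n_steps=10):
    """Expected P(correct) at each practice opportunity under knowledge
    tracing, without conditioning on observed responses.
    L0 = initial knowledge, T = learning rate, G = guess, S = slip."""
    curve = []
    pL = L0
    for _ in range(n_steps):
        curve.append(pL * (1 - S) + (1 - pL) * G)  # known & no slip, or unknown & guess
        pL = pL + (1 - pL) * T                      # chance of learning the skill
    return curve

def beta_log_prior(p, alpha, beta):
    """Log density (up to a constant) of a Beta(alpha, beta) prior.
    Adding this to the data log-likelihood penalizes implausible
    parameter values during model search."""
    return (alpha - 1) * log(p) + (beta - 1) * log(1 - p)

# Two different parameter sets (hypothetical values): one says the student
# starts knowing nothing but guesses well; the other says the student starts
# with some knowledge and guesses poorly. Their performance curves match.
curve_a = bkt_marginal_curve(L0=0.00, T=0.1, G=0.35, S=0.05)
curve_b = bkt_marginal_curve(L0=0.25, T=0.1, G=0.15, S=0.05)
assert all(abs(a - b) < 1e-9 for a, b in zip(curve_a, curve_b))

# A prior favoring low guess rates scores the two models differently,
# breaking the tie that the performance data alone cannot break.
print(beta_log_prior(0.35, 2, 5) < beta_log_prior(0.15, 2, 5))
```

The point of the sketch is that the data likelihood is flat across the whole equivalent family, so only the prior term distinguishes its members, which is the sense in which prior beliefs are necessary to bias the search.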