In recent years, it has become clear that educational data mining methods can play a positive role in refining the content of intelligent tutoring systems. In particular, efforts to determine which content is more or less effective at promoting learning can help improve tutoring systems by identifying ineffective content and cycling it out of the system. Analysis of the learning value of content can also help teachers and system designers create better content by taking notice of what has and has not worked in the past. Past work on this type of analysis has relied solely on student response data; we extend that work by instead utilizing the moment-by-moment learning model, P(J). This model uses parameters learned from Bayesian Knowledge Tracing, along with other features extracted from log data, to compute the probability that a student learned a skill at a specific problem step. By averaging P(J) values for a particular item across students, and comparing items using statistical testing with post-hoc controls, we can investigate which items typically produce more and less learning. We use this analysis to evaluate items within twenty problem sets completed by students using the ASSISTments Platform, and show how item learning results can be obtained and interpreted from this analysis.
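The per-item aggregation step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes each record is an (item, student, P(J)) triple already produced by the moment-by-moment learning model, and it only performs the averaging and ranking; the statistical comparison with post-hoc controls mentioned in the abstract is omitted.

```python
# Hypothetical sketch of the item-level P(J) aggregation described above.
# Assumes records of (item_id, student_id, p_j), where p_j is the
# moment-by-moment learning estimate for that student's step on that item.
# All names and data here are illustrative, not from the paper.
from collections import defaultdict
from statistics import mean

def item_learning_means(records):
    """Average P(J) per item across students."""
    by_item = defaultdict(list)
    for item_id, _student_id, p_j in records:
        by_item[item_id].append(p_j)
    return {item: mean(vals) for item, vals in by_item.items()}

# Toy data: item "A" tends to show higher P(J) than item "B".
records = [
    ("A", "s1", 0.30), ("A", "s2", 0.25), ("A", "s3", 0.35),
    ("B", "s1", 0.10), ("B", "s2", 0.15), ("B", "s3", 0.05),
]
means = item_learning_means(records)
ranked = sorted(means, key=means.get, reverse=True)
print(ranked)  # items ordered from most to least average learning
```

In the actual analysis, the per-item means would then be compared across items with a significance test and post-hoc corrections before concluding that one item produces more learning than another.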