Secondary teachers across the United States are being asked to use formative assessment data (Black and Wiliam 1998a,b; Roediger and Karpicke 2006) to inform their classroom instruction. At the same time, critics of the US government's No Child Left Behind legislation are calling the bill "No Child Left Untested". Among other things, critics point out that every hour spent assessing students is an hour lost from instruction. But does it have to be? What if we better integrated assessment into classroom instruction and allowed students to learn during the test? We developed an approach that provides immediate tutoring on practice assessment items that students cannot solve on their own. Our hypothesis is that we can achieve more accurate assessment not only by using data on whether students get test items right or wrong, but also by using data on the effort required for students to solve a test item with instructional assistance. We have integrated assistance and assessment in the ASSISTment system. The system helps teachers make better use of their time by offering instruction to students while providing teachers with a more detailed evaluation of student abilities than current approaches allow. Our approach to assessing student math proficiency is to use the data that our system collects through its interactions with students to estimate their performance on an end-of-year high-stakes state test. Our results show that we can do a reliably better job of predicting student end-of-year exam scores by leveraging the interaction data, and that a model based only on the interaction information makes better predictions than the traditional assessment model that uses only information about correctness on the test items.
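The core claim — that interaction data (effort, assistance requested) predicts end-of-year scores better than correctness alone — can be sketched as a model comparison. The following is an illustrative sketch only, not the ASSISTment system's actual model: the feature names, the simulated data, and the use of ordinary least squares are all assumptions made for demonstration.

```python
# Hypothetical sketch (NOT the paper's actual model): compare a correctness-only
# predictor of a state-test score against one that also uses interaction features.
import numpy as np

rng = np.random.default_rng(0)
n = 200  # simulated students

# Illustrative per-student features (all names and values are made up):
pct_correct = rng.uniform(0.2, 1.0, n)        # fraction of items answered right
hints_per_item = rng.uniform(0.0, 3.0, n)     # assistance requested per item
attempts_per_item = rng.uniform(1.0, 4.0, n)  # effort to reach a correct answer

# Simulated test score: depends on correctness AND on how much help was needed.
score = (300 + 200 * pct_correct
         - 15 * hints_per_item
         - 10 * attempts_per_item
         + rng.normal(0, 10, n))

def fit_and_mae(X, y):
    """Ordinary least squares with an intercept; return mean absolute error."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.mean(np.abs(A @ coef - y))

mae_correctness = fit_and_mae(pct_correct.reshape(-1, 1), score)
mae_interaction = fit_and_mae(
    np.column_stack([pct_correct, hints_per_item, attempts_per_item]), score)

print(f"correctness-only MAE: {mae_correctness:.1f}")
print(f"with interaction data MAE: {mae_interaction:.1f}")
```

On this toy data the interaction-feature model fits markedly better, because the omitted effort and assistance signals carry information that raw correctness does not — mirroring, in miniature, the comparison the abstract describes.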