Experimental Evaluation of Automatic Hint Generation for a Logic Tutor
International Journal of Artificial Intelligence in Education - Best of AIED 2011
In prior work we showed it was feasible to augment a logic tutor with a data-driven Hint Factory that automatically generates context-specific hints for an existing computer-aided instructional tool. Here we investigate the impact of automatically generated hints on educational outcomes in a controlled experiment showing that hints help students persist in deductive logic courses. Three instructors each taught two semester-long courses: one semester using a logic tutor with hints and one semester using the tutor without hints, controlling for the impact of individual instructors on course outcomes. Our results show that students in the courses using the hint-augmented logic tutor attempted and completed significantly more logic proof problems, were less likely to abandon the tutor, and performed significantly better on a post-test administered within the tutor.