In an adaptive and intelligent educational system (AIES), learning pedagogical policies tailored to the students' needs can be cast as a Reinforcement Learning (RL) problem. Previous work has shown that a large amount of experience is required before the system learns to teach properly, so applying RL to an AIES from scratch is infeasible. Other work has argued, on theoretical grounds, that seeding the AIES with an initial value function learned from simulated students reduces the experience required to learn an accurate pedagogical policy. In this paper we present empirical results demonstrating that a value function learned with simulated students provides the AIES with a highly accurate initial pedagogical policy. The evaluation is based on the interactions of more than 70 Computer Science undergraduate students, and shows that the resulting policy guides them efficiently and usefully through the contents of the educational system.
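The seeding idea in the abstract can be illustrated with a minimal sketch, not the paper's actual system: a tabular Q-learning agent first learns a value function against a toy simulated-student model, so that the greedy policy extracted from it is already accurate before any real student interacts with the tutor. The state/action sizes, the student model, and all function names below are assumptions made for illustration.

```python
import random

random.seed(0)

# Toy setup (assumed, not from the paper): states are knowledge levels,
# actions are pedagogical actions (e.g. show example, pose exercise, test).
N_STATES = 5      # 0 .. N_STATES-1; the last state means "content mastered"
N_ACTIONS = 3
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def simulated_student_step(state, action):
    """Hypothetical student model: at each knowledge level one action is
    most effective (advances with prob. 0.8), the others rarely help."""
    p_advance = 0.8 if action == state % N_ACTIONS else 0.2
    next_state = min(state + 1, N_STATES - 1) if random.random() < p_advance else state
    # Small cost per step, reward on mastering the content.
    reward = 1.0 if next_state == N_STATES - 1 else -0.1
    return next_state, reward

def train(q, episodes):
    """Standard epsilon-greedy tabular Q-learning against the student model."""
    for _ in range(episodes):
        state = 0
        while state < N_STATES - 1:
            if random.random() < EPSILON:
                action = random.randrange(N_ACTIONS)
            else:
                action = max(range(N_ACTIONS), key=lambda a: q[state][a])
            nxt, r = simulated_student_step(state, action)
            q[state][action] += ALPHA * (r + GAMMA * max(q[nxt]) - q[state][action])
            state = nxt
    return q

# Phase 1: seed the value function entirely with simulated students.
q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
train(q, episodes=2000)

# The greedy policy extracted from the seeded value function already
# selects the effective pedagogical action at every knowledge level,
# so the tutor needs far fewer real interactions to stay accurate.
greedy = [max(range(N_ACTIONS), key=lambda a: q[s][a]) for s in range(N_STATES - 1)]
print(greedy)
```

In the paper's setting, phase 2 would continue the same update rule with real students; because the table is already seeded, those interactions refine rather than build the policy.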