We identify two fundamental links between CBR and an adaptive agent that learns by trial and error without a model of its environment. The first concerns the maximally efficient exploitation of the experience the agent has collected by interacting with its environment, while the second relates to the acquisition and representation of a suitable behavior policy. Combining both, we develop a state-action value function approximation mechanism that relies on case-based, approximate transition graphs and forms the basis on which the agent improves its behavior. We evaluate our approach empirically on dynamic control tasks.
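As a rough illustration of how such a mechanism might operate (the paper defines the precise construction), the following Python sketch stores observed transitions as cases, links each successor state to its nearest stored cases to form an approximate transition graph, and runs value-iteration-style sweeps over that graph to estimate state-action values. The class and method names, the Euclidean distance metric, and the neighbour count `k` are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

class CaseBasedQApproximator:
    """Sketch of case-based Q-value approximation (assumed design, not
    the paper's exact algorithm).

    Each case is a transition (state, action, reward, next_state). An
    approximate transition graph is formed by linking each case's
    next_state to the k stored cases whose states are nearest to it;
    value-iteration sweeps over that graph yield Q estimates per case.
    """

    def __init__(self, actions, gamma=0.95, k=3):
        self.cases = []      # list of (s, a, r, s') tuples
        self.actions = actions
        self.gamma = gamma   # discount factor
        self.k = k           # neighbours used to link the graph

    def add_case(self, s, a, r, s_next):
        # store one observed transition as a case
        self.cases.append((np.asarray(s, float), a, r,
                           np.asarray(s_next, float)))

    def _neighbours(self, s):
        # indices of the k stored cases whose states are closest to s
        d = [np.linalg.norm(s - c[0]) for c in self.cases]
        return np.argsort(d)[: self.k]

    def solve(self, sweeps=50):
        # dynamic-programming sweeps over the approximate transition
        # graph: the value of a successor state is approximated by the
        # best Q value among its nearest stored cases
        q = np.zeros(len(self.cases))
        for _ in range(sweeps):
            for i, (_, _, r, s_next) in enumerate(self.cases):
                succ = self._neighbours(s_next)
                q[i] = r + self.gamma * max(q[j] for j in succ)
        self.q = q

    def best_action(self, s):
        # greedy policy: act as in the nearest case with the highest Q
        succ = self._neighbours(np.asarray(s, float))
        best = max(succ, key=lambda j: self.q[j])
        return self.cases[best][1]
```

In this reading, the case base serves both purposes named in the abstract: the stored transitions exploit the agent's collected experience directly, and the Q values computed over the induced graph represent the behavior policy via greedy action selection.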