Goal-Driven autonomy with case-based reasoning
ICCBR'10 Proceedings of the 18th international conference on Case-Based Reasoning Research and Development
Goal-driven autonomy (GDA) is a reflective model of goal reasoning that controls the focus of an agent's planning activities by dynamically resolving unexpected discrepancies in the world state, which frequently arise when solving tasks in complex environments. GDA agents have performed well on such tasks by integrating methods for discrepancy recognition, explanation, goal formulation, and goal management. However, they require substantial domain knowledge, including what constitutes a discrepancy and how to resolve it. We introduce LGDA, a learning algorithm that acquires this knowledge, modeled as cases, by integrating case-based reasoning and reinforcement learning methods. We assess its utility on tasks from a complex video game environment. We claim that, for these tasks, LGDA can significantly outperform its ablations. Our evaluation provides evidence to support this claim. LGDA exemplifies a feasible design methodology for deployable GDA agents.