AAAI'93 Proceedings of the eleventh national conference on Artificial intelligence
Explanation-Based Learning (EBL) depends on the ability of a system to explain to itself, based on the domain theory, that a given training example is a member of the target concept. In many complex domains, however, constructing such an explanation is intractable. In this paper I introduce a learning technique called Lazy Explanation-Based Learning as a solution to the problem of an intractable explanation process in EBL. The technique is based on the idea that when the domain theory is intractable, a system can still learn by generalizing incomplete explanations and then incrementally refining the over-general knowledge thus learned when it is met with unexpected plan failures. I describe a program that incrementally learns planning knowledge in game domains through Lazy Explanation-Based Learning, and I present both empirical and theoretical evidence for the viability of the technique.
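The learn-then-refine loop described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the rule representation (a set of preconditions plus learned "censors" that block over-general firings) and all names here are hypothetical, chosen only to show how an over-general rule learned from an incomplete explanation might be specialized on an unexpected failure.

```python
# Sketch of the Lazy EBL loop: learn an over-general rule from an
# incomplete explanation, then specialize it with a censor whenever
# it leads to a plan failure. All names are illustrative.

class LazyRule:
    def __init__(self, preconditions):
        self.preconditions = set(preconditions)  # over-general: may omit conditions
        self.censors = []                        # learned exceptions (failure states)

    def applies(self, state):
        # Fire only if the preconditions hold and no censor matches the state.
        return (self.preconditions <= state
                and not any(censor <= state for censor in self.censors))

    def refine(self, failing_state):
        # On an unexpected failure, record the failing situation as a censor
        # so the rule no longer fires in states subsuming it.
        self.censors.append(frozenset(failing_state))


def lazy_ebl_step(rule, state, execute):
    """Apply the rule if it matches; refine it when the plan fails."""
    if not rule.applies(state):
        return "rule inapplicable"
    if execute(state):
        return "success"
    rule.refine(state)  # incremental refinement on failure
    return "refined after failure"


# Usage: an over-general rule learned from an incomplete explanation
# in a hypothetical chess-like domain.
rule = LazyRule({"piece_attacked"})
# Hypothetical executor: the plan actually fails when the piece is pinned,
# a condition the incomplete explanation failed to capture.
execute = lambda s: "pinned" not in s

print(lazy_ebl_step(rule, {"piece_attacked", "pinned"}, execute))  # refined after failure
print(lazy_ebl_step(rule, {"piece_attacked", "pinned"}, execute))  # rule inapplicable
print(lazy_ebl_step(rule, {"piece_attacked"}, execute))            # success
```

The key property of the sketch is that learning is eager but correctness is enforced lazily: the rule is usable immediately, and each unexpected failure shrinks its coverage rather than triggering a full re-explanation against the intractable domain theory.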