Tractable learning and planning in games
In spite of the popularity of Explanation-Based Learning (EBL), its theoretical basis is not well understood. Using a generalization of Probably Approximately Correct (PAC) learning to problem-solving domains, this paper formalizes two forms of Explanation-Based Learning of macro-operators and proves sufficient conditions for their success. These two forms of EBL, called "Macro Caching" and "Serial Parsing," exhibit two distinct sources of power, or "bias": the sparseness of the solution space and the decomposability of the problem space, respectively. The analysis shows that exponential speedup can be achieved when either of these biases suits a domain. Somewhat surprisingly, it also shows that computing the preconditions of the macro-operators is not necessary to obtain these speedups. The theoretical results are confirmed by experiments in the Eight Puzzle domain. Our work suggests that the best way to address the utility problem in EBL is to implement a bias that exploits the problem-space structure of the set of domains one is interested in learning.
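To make the Macro Caching idea concrete, here is a minimal sketch (not taken from the paper; all function names and the caching policy are illustrative assumptions) on the Eight Puzzle: solutions found by search are stored as macro-operators keyed by the states they solve, so that previously seen problems, and the intermediate states along their solutions, are answered by lookup instead of renewed search.

```python
from collections import deque

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 is the blank

# Board indices adjacent to each blank position on the 3x3 grid
MOVES = {
    0: (1, 3), 1: (0, 2, 4), 2: (1, 5),
    3: (0, 4, 6), 4: (1, 3, 5, 7), 5: (2, 4, 8),
    6: (3, 7), 7: (4, 6, 8), 8: (5, 7),
}

def apply_move(state, dest):
    """Swap the blank with the tile at board index `dest`."""
    s = list(state)
    b = s.index(0)
    s[b], s[dest] = s[dest], s[b]
    return tuple(s)

def bfs_solve(start):
    """Breadth-first search; returns a shortest move sequence to GOAL."""
    if start == GOAL:
        return []
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        for dest in MOVES[state.index(0)]:
            nxt = apply_move(state, dest)
            if nxt in seen:
                continue
            if nxt == GOAL:
                return path + [dest]
            seen.add(nxt)
            frontier.append((nxt, path + [dest]))
    raise ValueError("unsolvable instance")

def solve_with_cache(start, cache):
    """Answer from the macro cache if possible; otherwise search and
    cache every suffix of the solution as a macro-operator."""
    if start in cache:
        return cache[start]  # cache hit: no search performed
    plan = bfs_solve(start)
    state = start
    for i, dest in enumerate(plan):
        cache.setdefault(state, plan[i:])
        state = apply_move(state, dest)
    return plan
```

Note that the macros are stored without any computed preconditions: each one is indexed by the exact state it was learned from, which mirrors the abstract's point that precondition computation is not needed for the speedup.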