Explanation-based learning depends on having an explanation on which to base generalization. Thus, a system with an incomplete or intractable domain theory cannot use this method to learn from every precedent. However, in such cases the system need not resort to purely empirical generalization methods, because it may already know almost everything required to explain the precedent. Learning by failing to explain is a method that uses current knowledge to prune the well-understood portions of complex precedents (and rules) so that what remains may be conjectured as a new rule. This paper describes two processes: precedent analysis, the partial explanation of a precedent (or rule) to isolate the new technique(s) it embodies, and rule reanalysis, the analysis of old rules in terms of new rules to obtain a more general rule set. The algorithms PA, PA-RR, and PA-RR-GW implement these ideas in the domains of digital circuit design and simplified gear design.
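The core idea of precedent analysis can be sketched in miniature. The toy below is an assumption of this summary, not the PA algorithm itself: it treats a precedent as a set of propositional features and the domain theory as premise-conclusion rules, whereas PA operates over structural descriptions of circuits. The theory partially explains the precedent by forward chaining; the parts it accounts for are pruned (replaced by their conclusions), and if the goal remains unexplained, the residue is conjectured as the body of a new rule. All names (`nand`, `xor_wiring`, `half_adder`) are illustrative.

```python
def forward_chain(rules, facts):
    """Derive everything the current domain theory can explain from `facts`.
    `rules` is a list of (frozenset_of_premises, conclusion) pairs."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def conjecture_rule(rules, precedent, goal):
    """Partially explain `precedent`; if `goal` stays unexplained, prune the
    well-understood parts (replacing them by their conclusions) and conjecture
    the residue as the body of a new rule with head `goal`.
    Returns None when the precedent is fully explained (ordinary EBL applies)."""
    derived = forward_chain(rules, precedent)
    if goal in derived:
        return None
    # Facts consumed by a fired rule are "well understood"; prune them,
    # keeping the rule conclusions that stand in for them.
    understood = set()
    for premises, conclusion in rules:
        if premises <= derived:
            understood |= premises
    residue = frozenset(derived - understood)
    return (residue, goal)

# Hypothetical example: the theory explains the nand subcircuit, but not
# how the remaining wiring yields a half adder; that residue is conjectured.
rules = [(frozenset({"and_gate", "inverter"}), "nand")]
precedent = {"and_gate", "inverter", "xor_wiring"}
new_rule = conjecture_rule(rules, precedent, "half_adder")
# new_rule pairs the unexplained residue with the goal:
# (frozenset({'xor_wiring', 'nand'}), 'half_adder')
```

Rule reanalysis would then re-examine existing rules with the conjectured rule in hand, pruning their now-explained portions in the same way to obtain more general rules.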