Machine Learning
In explanation-based learning, the solution to a specific problem is generalized into a form that can later be used to solve conceptually similar problems. Most research in explanation-based learning involves relaxing constraints on the variables in the explanation of a specific example, rather than generalizing the graphical structure of the explanation itself. This precludes the acquisition of concepts in which an iterative or recursive process is implicitly represented in the explanation by a fixed number of applications. This paper presents an algorithm that generalizes explanation structures and reports empirical results that demonstrate the value of acquiring recursive and iterative concepts. The BAGGER2 algorithm learns recursive and iterative concepts, integrates results from multiple examples, and extracts useful subconcepts during generalization. On problems where learning a recursive rule is not appropriate, the system produces the same result as standard explanation-based methods. Applying the learned recursive rules requires only a minor extension to a PROLOG-like problem solver, namely, the ability to explicitly call a specific rule. Empirical studies demonstrate that generalizing the structure of explanations helps avoid the recently reported negative effects of learning.
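The distinction the abstract draws can be illustrated with a small sketch. This is a hypothetical toy example, not the BAGGER2 algorithm itself: a rule learned by standard explanation-based generalization from a three-step explanation (say, unstacking a three-block tower) keeps the explanation's fixed structure and so applies only to problems of exactly that size, whereas a structure-generalized rule replaces the fixed chain of steps with a base case and a recursive case, covering any size.

```python
# Hypothetical contrast between a standard EBG rule (fixed structure,
# generalized variables) and a structure-generalized recursive rule.
# The "problem" is abstracted to a single size parameter, e.g. the
# height of a tower of blocks to unstack.

def fixed_rule_applies(tower_height: int) -> bool:
    # Standard EBG: the explanation's three steps are kept verbatim,
    # so the rule only matches towers of exactly height 3.
    return tower_height == 3

def recursive_rule_applies(tower_height: int) -> bool:
    # Structure-generalized rule: a base case plus a recursive case,
    # mirroring the iterative process implicit in the explanation.
    if tower_height == 0:   # base case: nothing left to unstack
        return True
    return recursive_rule_applies(tower_height - 1)

print(fixed_rule_applies(5))      # False: structure mismatch
print(recursive_rule_applies(5))  # True: recursion covers any height
```

On a problem matching the training size (height 3), both rules apply and yield the same result, which parallels the paper's claim that the system reduces to standard explanation-based methods when recursion is not needed.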